Entropies and the Anthropocene crisis

AI and society

Entropy is a transversal notion to understand the Anthropocene, from physics to biology and social organizations. For the living, it requires a counterpart: anti-entropy.


Maël Montévil

Abstract

The Anthropocene crisis is frequently described as the rarefaction of resources or resources per capita. However, both energy and minerals correspond to fundamentally conserved quantities from the perspective of physics. A specific concept is required to understand the rarefaction of available resources. This concept, entropy, pertains to energy and matter configurations and not just to their sheer amount.

However, the physics concept of entropy is insufficient to understand biological and social organizations. Biological phenomena display both historicity and systemic properties. A biological organization, the ability of a specific living being to last over time, results from history, expresses itself by systemic properties, and may require generating novelties. The concept of anti-entropy stems from the combination of these features. We propose that Anthropocene changes disrupt biological organizations by randomizing them, that is, decreasing anti-entropy. Moreover, second-order disruptions correspond to the decline of the ability to produce functional novelties, that is, to produce anti-entropy.

Keywords: entropy, anti-entropy, resources, organization, disruption, Anthropocene

1 Introduction

Despite cases of denial, citizens and governments increasingly acknowledge the Anthropocene as a crisis. Nevertheless, this crisis requires further theoretical characterization. For example, geological definitions of the Anthropocene mostly build on human productions that could be found in future geological strata, with indicators such as chicken bones, radionuclides, and carbon. However, these operational definitions for stratigraphy do not contribute much to understanding the underlying process and how to produce the necessary bifurcations. Beyond stratigraphy, in the second “warning to humanity” signed by more than 15,000 scientists, the arguments are strong but build mostly on a single line of reasoning. The authors exhibit quantities that are growing or shrinking exponentially (Ripple et al. 2017), and it stands to reason that such a trend cannot persist on a finite planet. This line of reasoning is commonplace in physics and shows that a change of dynamics is the only possibility. For example, the said quantities may reach a maximum, or the whole system may collapse. However, are these lines of reasoning sufficient to understand the Anthropocene crisis and respond adequately to it?

Several authors have specified the diagnosis of the Anthropocene. They argue that this crisis is not a result of the Anthropos sui generis, but the result of specific social organizations. Let us mention the concept of capitalocene, for which the dynamics of capital is the decisive organizational factor (Moore 2016). Capital opened the possibility of indefinite accumulation abstracted from other material objects. Along a similar line, the concept of plantationocene posits that the plantation is the damaging paradigm of social organizations and relationships to other living beings (Haraway 2015; Davis et al. 2019). In both cases, the focus is on human activities and why they are destructive for their conditions of possibility. These accounts provide relevant insights, but we think they are insufficient in their articulation with natural sciences.

To integrate economics and natural processes, Georgescu-Roegen (1993) emphasized the theoretical role of entropy in physics. Economists should part with the epistemology of classical mechanics where conservation principles and determinism dominate. In thermodynamics, the degradation of energy is a crucial concept: the irreversible increase of entropy. Methodologically, the implication is that economists should take into account the relevant knowledge about natural phenomena instead of working on self-contained mathematical models representing self-contained market processes.

This work has been reinterpreted by Stiegler (2018, 2019). B. Stiegler argues that the Anthropocene’s hallmark is the growth of entropies and entropy rates at all levels of analysis, including the biological and social levels. In this paper, we will discuss several aspects of this idea, focusing on mathematized situations or situations where mathematization is within sight. Entropy leads to a shift from considering objects that are produced or destroyed — even energy is commonly said to be consumed — to considering configurations, organizations, and their disruptions.

We first explain why entropy is a critical concept to understand the “consumption” of energy resources. We provide a conceptual introduction to the thermodynamic concept of entropy that frames these processes in physics. We will also discuss resources like metals and argue that the property impacted by biological and human activity is not their amount on Earth but their configuration. Concentrations of metals increase when geological processes generate ore deposits. Conversely, the use of artifacts can disperse their constituents. Last, compounds dispersed in the environment can be concentrated again by biological activities, leading to marine life contamination with heavy metals, for example.

To address biological organizations and their disruptions, we first develop several theoretical concepts. The epistemological framework of theoretical biology differs radically from equilibrium thermodynamics — and physics in general. We introduce the concepts of anti-entropy and anti-entropy production that mark a specific departure from thermodynamic equilibrium. We show that they enable us to understand critical destructive processes for biological and human organizations.

2 Entropy in physics and application to available resources

In this section, we will discuss two kinds of resources relevant to the economy and show that the proper understanding of these resources requires the concept of entropy in the physical sense of the word. The first case that we will discuss is energy, and the second is elements such as metals.

2.1 Energy and entropy

The stock of energy resources is commonly discussed in economics and the public debate. However, it is a fundamental principle of physics that energy is conserved. It is a physical impossibility to consume energy stricto sensu. For example, the fall of a ball converts potential energy into kinetic energy, and if it bounces without friction, it will reach the initial height again, transforming kinetic energy back into potential energy. This remark is made repeatedly by physicists and philosophers but does not genuinely influence public discourses (Mosseri and Catherine 2013). Georgescu-Roegen (1993) and authors who built on his work are an exception.

To dramatize the importance of this theoretical difficulty, let us mention that the increase in a body’s temperature implies increased internal energy. Heat engines, including thermal power plants, are a practical example of this: they transform heat into useful work (e.g., motion). We are then compelled to ask an unexpected question. Why would climate change and the subsequent increases in temperature not solve the energy crisis?

2.1.1 Thermodynamic entropy

The greenhouse effect keeps the energy coming from the Sun on Earth, and at the same time, the shrinking of resources such as oil leads to a possible energy crisis. The main answer to this paradox is that not all forms of energy are equivalent.

Let us picture ourselves in an environment at a uniform temperature. In this situation, there is abundant thermal energy surrounding us, but there are no means to generate macroscopic motions from this energy. We need bodies at different temperatures to produce macroscopic motions. For example, warming up a gas leads to its expansion and can push a piston. If the gas is already warm, it cannot exert a net force on the piston. It is the warming up of the gas that generates usable work, and this process requires objects with different temperatures.

An engine requires a warm and a cold source, a temperature difference. This rationale led to the design of cycles where, for example, a substance is warmed up and cooled down iteratively. These cycles are the basis of heat engines. Nineteenth-century physicists, in particular Carnot and Clausius, theorized these cycles. When generating macroscopic motion out of thermal energy, the engine’s maximum efficiency is limited, and physicists introduced entropy to theorize this limitation.1 The efficiency depends on the ratio of temperatures of the cold and the warm sources. When the temperatures tend to become equal, the efficiency decreases and tends to zero. As a side note, nuclear power plants use the same principle, where the warm source results from atomic fission, and the cold source is a river or the sea. It follows that the higher the temperature of their surroundings is, the less efficient they are. Incidentally, it also follows that nuclear power plants are often close to the sea, which can lead to some problems in a context where the sea level is expected to rise.
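As a worked illustration (a standard textbook result, added here for concreteness, not an element of the original argument), the maximum efficiency of an engine working between a warm source at absolute temperature T_h and a cold source at T_c is the Carnot bound:

η_max = 1 − T_c/T_h

For example, with a warm source at 600 K and a cold source at 300 K, at most half of the heat can be converted into work; if the cold source warms up to 320 K, the bound drops to about 0.47, and it vanishes altogether when T_c approaches T_h.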

Now, let us consider pouring warm water and cold water together in a pot. After some time, the water will reach a uniform temperature, and we have lost the chance to extract mechanical work out of the initial temperature difference. This phenomenon is remarkable because it displays a temporal direction: we have lost the ability to do something. Theoretically, this kind of phenomenon defines a time arrow that classical mechanics lacks.2 Likewise, it is possible to generate heat out of mechanical work by friction, including in the case of electric heaters, but, as we have seen, the opposite requires two heat sources at different temperatures.
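This loss can be quantified in an idealized case (our illustration, assuming equal masses m of water with constant heat capacity c). The final temperature is the arithmetic mean T_f = (T_h + T_c)/2, and the total entropy change of the mixing is

ΔS = mc log(T_f/T_h) + mc log(T_f/T_c) = mc log(T_f²/(T_h T_c))

Since the arithmetic mean T_f is larger than the geometric mean √(T_h T_c), ΔS is strictly positive: the mixing produces entropy, and restoring the initial temperature difference would require work.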

Following the first principle of thermodynamics, energy is a conservative quantity. Being conservative is a different notion from being conserved. A conserved quantity does not change over time in a system. For example, the number of water molecules in a sealed bottle is conserved. This property pertains both to the quantity discussed and the nature of the system’s boundaries. By contrast, being conservative pertains mainly to the quantity itself. A conservative quantity can change in the intended system, but only via flows with the outside, and the change corresponds precisely to the flow. A system’s energy is not necessarily conserved; it can decrease if it is released outside or increase if some energy comes from outside. The same is not exactly valid for the number of water molecules because they can disappear in chemical reactions. Instead, chemists consider that the number of atoms, here hydrogen and oxygen, is conservative.

In this context, what is entropy? The classical thermodynamic perspective defines entropy as a quantity describing the state of a system together with other quantities like energy and volume. Physicists used to think of heat as the exchange of an abstract fluid, the “caloric”; however, the possibility of a complete transformation of work into heat and the partial conversion of heat into work is not amenable to such a definition. Nevertheless, the notion of fluid remains partially relevant to understand what entropy abstractly is. Entropy is proportional to the size of a system, like mass or energy. Entropy can be exchanged, and in special conditions called reversible, entropy is conservative, like energy.

However, the difference between entropy and energy is that entropy tends to increase towards a maximum in an isolated system, following the second principle of thermodynamics. This statement has two implications: i) entropy is not conservative in general, and ii) the non-conservative changes of entropy are only increases. In reversible situations, entropy is conservative. By contrast, irreversibility leads to the concept of entropy production: a net increase of entropy that does not stem from flows with the surroundings.

Here again, being conservative is not the same as being conserved, and entropy production is the departure from entropy being conservative. Nicolis and Prigogine (1977) showed that a system such as a flame can produce entropy continuously and still be stationary if the resulting entropy flows to the surroundings. Here, the entropy of the system is conserved, but it is not conservative. Similarly, the entropy of a system can decrease when work is used to this end. For example, centrifugation separates compounds of a gas or a liquid.

The second principle of thermodynamics also captures the idea that heat can only go from warm bodies to cold bodies. The entropy change due to a heat exchange Q is dS = Q/T, where S is the entropy and T is the temperature. Then, if we have an isolated system with two bodies at temperatures T_h > T_c exchanging heat, dS = Q_ch/T_h + Q_hc/T_c, where Q_ch is the heat received by the warm body and Q_hc the heat received by the cold body. We assume that the objects only exchange heat with each other, so that Q_ch = −Q_hc. The only way for dS to be positive is for Q_hc to be positive; that is, the energy goes from the warm body to the cold body.

In classical thermodynamics, the central concept is thermodynamic equilibrium. At equilibrium, there are no macroscopic net fluxes within the system and with the system surroundings. For example, if we consider an open room, thermodynamic equilibrium is met when temperature, pressure, and other variables are homogeneous and the same as the surroundings. There are always exchanges of gas with the surroundings, but on average, there are no fluxes. By contrast, Nicolis and Prigogine (1977) describe stationary configurations far from thermodynamic equilibrium where there is a net flow of entropy from the system to the surroundings.

Thermodynamic equilibrium is typically the optimum of a function called a state function. These functions are combinations of the state variables appropriate for a given coupling with the system’s surroundings. For example, entropy is maximal for an isolated system at thermodynamic equilibrium. As another example, Helmholtz free energy F describes the usable work that can be obtained from a system at constant temperature and volume. Let us discuss its form, F = U − TS, where U is the internal energy, T the temperature, and S the entropy. TS typically corresponds to the energy in thermal form, so that F is the internal energy minus the energy in thermal form. Spontaneously, Helmholtz’s free energy will tend to a minimum. This property is used in engineering to design processes leading to the desired outcome.

Helmholtz free energy is not the most commonly used function. Consider a battery in ordinary conditions; its purpose is to provide electrical work to a circuit, a smartphone, say. Part of the battery’s work is its dilation, which will push air around it. However, this is not genuinely useful. This kind of situation leads to the definition of Gibbs energy, the maximum amount of non-expansion work that can be obtained when temperature and pressure are set by the surroundings, G = F + pV, where p is pressure and V is volume.

In these examples, couplings with surroundings are a manifestation of technological purposes. Sometimes, the concept of exergy is used to describe available energy in general. Unlike Helmholtz and Gibbs free energies, exergy is not a state function because it depends on the quantities describing the system’s surroundings, such as external temperature. In other words, calculations on state functions like free energies only depend on initial and final conditions. By contrast, work, heat, or exergy balances depend on the transformation path, not just initial and final states. It follows that exergy depends on circumstances and cannot be aggregated in general. Practically, this means that the available energy of a nuclear power plant with a given amount of nuclear fuel is not just a property of this power plant or fuel; it depends on external temperature (precisely, water input temperature).

Classical mechanics is deterministic and provides the complete trajectories of the objects described. By contrast, thermodynamics only determines the final state of a system by minimizing the appropriate function. Since this state is singularized mathematically as an extremum, theoreticians can predict it. The epistemological efficacy of this theory lies precisely in the ability to determine final states. A system can go from the initial situation to the final situation by many paths, but the outcome is the same. Calculations are performed on well-defined, theoretical paths, whereas the actual paths may involve phenomena such as explosions where variables like entropy are not well-defined (they are defined again at equilibrium).

Classical thermodynamics is about final states at thermodynamic equilibrium. There is no general theory for far from thermodynamic equilibrium conditions. The study of these situations may or may not use thermodynamic concepts. For example, biological evolution or linguistic phenomena all happen far from thermodynamic equilibrium, but their concepts are not thermodynamic. By contrast, non-equilibrium thermodynamics, such as the work of Nicolis and Prigogine (1977), is a direct extension of equilibrium thermodynamics. Unlike classical thermodynamics, these approaches need to introduce an accurate description of the dynamics. A standard method assumes that small parts of the system are at or close to thermodynamic equilibrium but that globally the system is far from it.

To sum this discussion up, entropy is abstractly similar to a fluid to an extent. This analogy’s shortcoming is that entropy is not conservative and spontaneously tends to a maximum in an isolated system. We do not genuinely consume energy; we are producing entropy. However, this does not lead to a straightforward accounting of entropy production on Earth. Earth is far from equilibrium, and its entropy is not well defined. Locally, exergy (usable energy) is not a state function, and we cannot aggregate exergy between systems of a different nature. Nevertheless, when comparing physically similar, local processes, entropy production and exergy are relevant and necessary concepts.

In this context, it is interesting to note that an increase in temperature leads to an increase in entropy. As such, if Earth’s entropy were defined, global warming would increase it. At the same time, Earth is exposed to the cold of space vacuum and loses heat this way. The greenhouse effect slows down this process and slows down the corresponding entropy production (released in open space). Accordingly, if we had a machine using the heat of the Earth’s surface as a warm source and the open space as a cold source, global warming would lead to more usable energy. Of course, this principled analysis has no practical counterpart. With this last example, we aim to emphasize again that the assessment of entropy and entropy production should be performed in the context of technological or biological processes.

2.1.2 Microscopic interpretations of entropy

The thermodynamic perspective described above is somewhat abstract; however, it has two microscopic interpretations introduced by Boltzmann and Gibbs. Debates on which of these interpretations is more fundamental are still ongoing, and their prevalence also has geographical differences (Goldstein et al. 2020; Buonsante et al. 2016). Despite their conceptual differences, for large isolated systems, they lead to identical mathematical conclusions. Moreover, both are bridges between microscopic and macroscopic descriptions. Here, we assume that the microscopic description is classical, deterministic dynamics, and we do not discuss the quantum case.

Let us start with Boltzmann’s interpretation of entropy. We consider gas in an insulated container so that its energy is constant. At the microscopic level, molecules move chaotically and bump into each other and the container’s walls. At this level, particles are described by their positions and velocities in three dimensions. These numerous quantities define together the microstate, X, and the microspace, i.e., the mathematical space of possible microstates. Let us insist that the microstate is not small; it describes all particles, typically numbering 10^23, thus the whole system. Then, we can define the possible macrostates. For example, we posit that one macrostate corresponds to the molecules’ uniform distribution at a given scale and with a given precision. We can define another macrostate where all the particles are in the container’s corner and one that encompasses all other possibilities. Depending on the microstate X, we will be in one of the three possible macrostates.

Let us follow Boltzmann and call Ω(X) the microspace volume that corresponds to the same macrostate as X. There are two crucial points in Boltzmann’s reasoning on Ω.

First, the microscopic volume of a particular macrostate is overwhelmingly higher than the one of others. This situation is a mathematical property that stems from the huge number of particles involved. As a mathematical illustration, let us throw coins. Heads are 1, and tails are 0. The macroscopic variable is the average of the result after a series of throws, which can go from 0 to 1. The first macrostate (M1) is met when this average is between 0.49 and 0.51. All other possibilities lead to the other macrostate (M2). With four throws we get, for example, 0011 → 0.5 (M1), 0110 → 0.5 (M1), 0010 → 0.25 (M2), 1110 → 0.75 (M2), and so on. The macroscopic outcomes are quite random. However, for 10000 throws, with simulations, we get 0.493 (M1), 0.499 (M1), 0.505 (M1), 0.507 (M1), 0.498 (M1), and so on. The system is always in the first macrostate, even though it covers a small part of the possible macroscopic values. This outcome stems from the combinatorics that leads an overwhelming number of possibilities to correspond to a specific macrostate, marginalizing alternatives.
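The simulation mentioned above is easy to reproduce. Here is a minimal sketch (our illustration, using the same macrostate definition: M1 when the average lies between 0.49 and 0.51):

```python
import random

def throw_coins(n_throws, low=0.49, high=0.51):
    """Throw n_throws fair coins; return the average and its macrostate."""
    avg = sum(random.randint(0, 1) for _ in range(n_throws)) / n_throws
    return avg, "M1" if low <= avg <= high else "M2"

for n in (4, 10_000):
    runs = [throw_coins(n) for _ in range(5)]
    print(n, [(round(avg, 3), m) for avg, m in runs])

# Typical output: with n = 4 the macrostate varies from run to run,
# while with n = 10,000 every run lands in M1, as described in the text.
```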

Second, Boltzmann assumes molecular chaos: the system explores the microspace uniformly. It follows that the time spent by the system in a given macrostate is proportional to the microscopic volume of this macrostate.

Since one macroscopic possibility corresponds to an overwhelming part of the microspace, the system will spontaneously go into this domain and remain there except for possible, rare, and short-lived periods called fluctuations. The larger the number of particles, the rarer fluctuations are. In typical situations, the number of particles is not 4 or 10000, but is closer to 1000000000000000000000; therefore, fluctuations do not matter.

The number of microstates Ω(X) tends to a maximum with vanishingly rare fluctuations. This result interprets the second principle of thermodynamics, which states that entropy cannot decrease in an isolated system. For example, why do all air molecules not go to one corner of the room? Because all microscopic situations are equally likely and far more microscopic configurations correspond to a uniform air concentration than any other macrostate, see figure 1.

Figure 1: Illustration of Boltzmann entropy. Here, the microspace is represented schematically in 2 dimensions, and colors represent the corresponding macrostates. The system starts from a microstate associated with macrostate A. It explores microstates uniformly and soon arrives in positions corresponding to the macrostate E because most microstates correspond to E. For a microstate X, the number of configurations leading to the same macrostate is Ω(X) (in light blue). Note that in physics, the microspace is not in 2 dimensions but has a huge number of dimensions — it is often the space of positions and momenta of all molecules, which leads to 3 + 3 = 6 quantities per particle.

As pointed out by Chibbaro et al. (2014), this notion is very intuitive. For example, when playing pool, the initial configuration is improbable, and we spontaneously think that somebody had to order the pool balls for them to be in a triangle shape. After striking them, their configuration becomes more uniform, and we acknowledge that it is the result of multiple random collisions. The same qualitative result will follow if we throw balls randomly on the table. It is the same for velocities. Initially, only the ball struck is moving, and all others are still. After the collision, the kinetic energy is distributed among the balls until friction stops them. Of course, the game’s goal is to go beyond randomness, and players aim for balls to reach specific locations.

The number of possibilities Ω is a multiplicative quantity. For example, if we throw a coin, there are 2 possibilities, but if we throw three coins, there are 2 × 2 × 2 = 8 possibilities. This mathematical situation does not fit with the idea that entropy is proportional to a system’s size, which is part of its classical definition. The logarithm function transforms multiplications into additions, so log(Ω_1 × Ω_2) = log(Ω_1) + log(Ω_2). Then log(Ω) fits the properties of classical entropy, and we can state with Boltzmann that:

S = k_B log(Ω(X)), where k_B is a constant

Of course, there are many refinements of this entropy definition. Here, we considered that the total energy is conserved, whereas it is not always the case. Then, the definition of macrostates must include energy.
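Returning to the coin example, Ω can even be counted exactly: the number of sequences of n throws with k heads is the binomial coefficient C(n, k). The following sketch (our illustration) shows how the macrostate M1 comes to dominate the microspace as n grows:

```python
from math import comb

def omega_M1(n, low=0.49, high=0.51):
    """Count the n-throw sequences whose average lies in [low, high]."""
    return sum(comb(n, k) for k in range(n + 1) if low <= k / n <= high)

for n in (4, 100, 10_000):
    fraction = omega_M1(n) / 2 ** n  # share of the microspace occupied by M1
    print(n, fraction)

# Output: about 0.375 for n = 4, 0.24 for n = 100, and 0.95 for n = 10,000.
# With large n, almost every microstate belongs to M1, even though M1
# covers only 2% of the possible macroscopic averages.
```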

Gibbs proposed a different conceptual framework to interpret thermodynamic entropy (Goldstein et al. 2020; Sethna 2006). Instead of studying the state of a single system, Gibbs studies an ensemble of possible systems, described by the microstates and their probabilities.

In particular, the fundamental postulate of statistical mechanics states that all microstates with the same energy have equal probability in an isolated system. This ensemble is called the microcanonical ensemble — this is Boltzmann’s hypothesis in a different conceptual context.

Then, except for temperature and entropy, the macroscopic quantities are averages of the microscopic quantities computed with the probabilities defining the ensemble. The Gibbs entropy is defined by:

S = −k_B Σ_i ρ_i log(ρ_i), where ρ_i is the probability of the microstate i

Despite their formal similarity, Gibbs and Boltzmann’s formulations have a critical difference. In Boltzmann’s formulation, a single microstate has an entropy: a microstate corresponds to a macrostate, this macrostate corresponds to many microstates, and how many define the entropy of the said microstate. By contrast, Gibbs framework is not about individual microstates; it considers all possible microstates simultaneously, and entropy is a property of their probability distribution. For example, when the system is isolated, and its total energy is constant, all microstates with the same energy have equal probability, which maximizes the entropy.

In a nutshell, the entropy being maximal is a property of the state of the system for Boltzmann. By contrast, it is a property of an ensemble of systems for Gibbs, and more specifically, it is a property of the associated probabilities. In mathematically favorable conditions (infinite number of particles), the outcome is the same despite this significant conceptual difference.
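As a small numerical check of the formula above (our illustration, with k_B set to 1), we can compare the Gibbs entropy of a uniform distribution over a few microstates with non-uniform alternatives; the uniform distribution is the maximum, matching the statement above:

```python
from math import log

def gibbs_entropy(probs, k_b=1.0):
    """S = -k_B * sum_i p_i * log(p_i); zero-probability states contribute 0."""
    return -k_b * sum(p * log(p) for p in probs if p > 0)

print(gibbs_entropy([0.25, 0.25, 0.25, 0.25]))  # log(4) ~ 1.386, the maximum
print(gibbs_entropy([0.7, 0.1, 0.1, 0.1]))      # ~ 0.940
print(gibbs_entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0, a fully determined state
```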

Microscopic interpretations of entropy present a hidden challenge. Liouville’s theorem states that the probabilities in an initial volume in the microspace are conserved over the dynamics. It follows that this volume cannot shrink or expand over time. Taken as is, this would mean that the entropy cannot increase over time — an embarrassing result when aiming to interpret the second principle of thermodynamics.

Figure 2: Coarse-graining versus Liouville’s theorem. As in figure 1, space is represented schematically in 2 dimensions. The microspace is coarse-grained by a grid. The systems are initially in a small part of the microspace, which corresponds to four coarse-grained boxes. After some time, the initial volume has deformed without expanding at the fine-grained level (in green). However, the coarse-grained volume occupied by the systems has expanded (in blue). After more time, the fine-grained volume has become highly convoluted and meets the whole coarse-grained space (in blue). The growth of the coarse-grained volume occupied by the systems is the argument explaining the growth of entropy.

The leading solution to this problem is a procedure called coarse-graining. Let us introduce it by analogy. Does sprayed water occupy a larger volume than when it was in the tank of a spray bottle? Once water is sprayed, a hand moved through the affected air is going to get wet. From the perspective of the hand, water occupies a vast volume of air. Nevertheless, the actual liquid water volume remains the same; water has just been dispersed, not added. This example illustrates two ways to understand the water volume: the fine-grained water volume that remains the same and the volume from the coarse-grained perspective of the hand — this volume has increased. Mathematically, if we partition space into boxes, all these boxes will contain some sprayed water. This procedure is called coarse-graining. The fine-grained water volume remains the same, but the coarse-grained volume has expanded (figure 2). In physics, coarse-graining follows this logic; however, space and volume no longer pertain to the three-dimensional physical space. Instead, these notions refer to the abstract microspace that typically corresponds to all particles’ positions and momenta in the system.

Technically, the microstates are not represented individually in entropy calculation because entropy would not change over time due to Liouville’s theorem. Instead, physicists use a coarse-grained representation of the system. The dynamics still preserve the fine-grained volume; however, the latter deforms, gets more and more convoluted over time, and meets more and more coarse-grained volumes (the boxes). As a result, the coarse-grained volume increases, and so does the entropy (figure 2).
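This argument can be checked numerically. Here is a minimal sketch (our illustration, using Arnold’s cat map, a standard area-preserving and chaotic map of the unit square, in place of actual molecular dynamics): the number of points, the analog of the fine-grained volume, never changes, but the number of occupied coarse-grained boxes grows until it saturates the grid.

```python
def cat_map(x, y):
    """Arnold's cat map: area-preserving (Liouville-like) and chaotic."""
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

# A small square of initial conditions: the analog of a low-entropy state.
n, eps = 200, 0.01
points = [(eps * i / n, eps * j / n) for i in range(n) for j in range(n)]

grid = 10  # coarse-graining: a 10 x 10 grid of boxes over the unit square
for step in range(8):
    boxes = {(int(x * grid), int(y * grid)) for x, y in points}
    print(step, len(boxes))  # occupied coarse-grained boxes at this step
    points = [cat_map(x, y) for x, y in points]

# The fine-grained volume is preserved at every step, but the coarse-grained
# volume grows from a single box until it saturates the grid, as in figure 2.
```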

Let us make several supplementary remarks.

First, in classical thermodynamics, the second principle is imperative: an isolated system’s entropy cannot decrease. By contrast, in Boltzmann’s formulation, entropy can also decrease, albeit overwhelmingly rarely. In Gibbs formulation, the equilibrium probabilities remain as such, so entropy can only increase.

Second, the concept of entropy in physics pertains to physics. The hallmark of this theoretical context is the use of the constant k_B. k_B is the bridge between temperature, heat, and mathematical entropy since an exchange of heat leads to Q/T = dS = k_B d log(Ω). Specifically, k_B has the dimension of energy divided by temperature. Sometimes, a similar mathematical apparatus can be used, for example, to study flocks of birds or schools of fish (Mora and Bialek 2011); however, this use is an analogy and does not convey the same theoretical meaning (Montévil 2019c). The absence of k_B is evidence of this fact. Along the same line, in physics, the space of possible microscopic configurations inherited from mechanics is positions and momenta, and other aspects can be added, such as molecular vibrations or chemical states.

Third, the relationship between a system and its coupling is complex. We have emphasized that exergy, in general, depends on variables describing its outside; therefore, it depends on transformation paths and is not a state function. Even for state functions, macroscopic systems’ descriptions depend on their couplings precisely because the state function that leads to predictions depends on the couplings. Along the same line, with Gibbs’s interpretation, the system’s statistics entirely depend on the couplings; it is impossible to describe the macroscopic system without them. A change of couplings will require a change of statistics. Boltzmann’s interpretation is more complex in that regard. The definition of macroscopic variables and coarse-graining depend on the couplings; however, the microscopic definitions are somewhat independent; for example, they may rely on classical mechanics.

Fourth, in a nutshell, why does an isolated system tend towards maximum entropy? Let us imagine that the system starts in a low entropy configuration. In Boltzmann’s formulation, the system will travel among possible microstates. Since most microstates correspond to a single macrostate, the system will spontaneously reach and stay in this macrostate, the maximum entropy configuration. In Gibbs formulation, the entropy is defined at equilibrium and does not change. The system may fluctuate according to its probability distribution; however, the entropy is about the probability distribution, not about the state. We can still picture a system initially at equilibrium, for example, a gas in a small box, and a change of coupling, for example, its release in a larger box. Then, the initial distribution is not at maximum entropy, and the change of coupling will lead to a change in distribution. Over time, the system spreads towards the equilibrium distribution, with maximum entropy — though Gibbs framework does not describe how.

In both cases, the macroscopic description of the object goes from a particular state towards the most generic configuration, and the increase of entropy erases the macroscopic peculiarities of the initial configuration. It erases the past. The increase of entropy corresponds to the spread among microstates towards more generic microstates. As such, we can interpret it as the dispersion of energy. For example, a warm body in contact with a cold body means that energy is concentrated in the former, while at thermal equilibrium, it is dispersed equally among the two bodies, according to their thermal capacities. Note that the increase of entropy is sometimes compatible with the appearance of macroscopic patterns. They can emerge due to energetic constraints in the formation of crystals such as ice, for example. Nevertheless, to enforce further patterns, work is required. For example, the Earth’s gravity field pulls heavier molecules to the bottom of a room — work is performed by gravitation, which has many implications for the Earth’s atmosphere or toxic gases.

Last, the articulation of the invariant and perspectival properties of entropy is a complex subject. Let us mention an interesting example given by Francis Bailly: when scientists discovered isotopes, seemingly equivalent particles could be distinguished. The macroscopic description changed, and so did the entropy. The decisive point is that previous predictions still hold. For example, if gas is initially in the corner of a room, it will spread in the room. However, we can make new predictions once we know that there are different isotopes. For example, if we see that only a given isotope is in the corner of the room, then we can predict that the corresponding entropy will increase and that the molecules with this isotope will spread in the room. Therefore, there is a level of arbitrariness in the definition of entropy; however, the arbitrary choices lead to consistent outcomes.

Along the same line, Boltzmann’s formulation depends on the definition of macrostates. The latter depends on the coupling between the system and its surroundings. Similarly, Gibbs entropy depends on coarse-graining, which also corresponds to the coupling between a system and its surroundings. In all cases, macroscopic couplings define the macroscopic variables that will determine equilibrium. Thus, entropy ultimately depends on these couplings. As a result, Rovelli (2017) argues that entropy and the corresponding time arrow are perspectival, where the perspectives are not merely subjective but stem from the couplings with surroundings. In the case of technologies, the couplings’ choice depends on the device’s purpose, as discussed above.

2.2 Dispersion and concentration of matter

In this section, we will discuss how entropy underlies the theoretical understanding of mineral resources. This case is relatively simple since it primarily translates into dispersion and concentration of matter. Georgescu-Roegen (1993) struggled with this question and even considered a possible fourth law of thermodynamics to state that perfect recycling would not be possible. The current consensus is that this point is not valid (Ayres 1999; Young 1991). The received view states that the dispersal of matter does not require a supplementary principle and that the second principle is sufficient. In other words, the dispersals of matter and energy are commensurable; they are not distinct.

For example, Ayres (1999) argues that a “spaceship” economy is possible in principle. In this thought experiment, free energy comes from outside ad libitum, and matter is recycled thanks to this energy indefinitely. We mostly agree with this perspective except on a specific point. If the system has to materialize its own boundaries (the shell of the spaceship or, in our primary interest, Earth’s atmosphere), these boundaries will be exposed to the void of space and eroded — a phenomenon producing entropy. For example, the Earth loses parts of its atmosphere continuously. However, this is more a principled issue than a practical one, and it does not depend significantly on human activities.

Ultimately, there is no sharp distinction between energy and matter, as expressed by Einstein’s equation E = mc². For example, protons are what we usually consider as stable matter. Nevertheless, some physical theories predict that they disintegrate randomly with extremely small probabilities, translating into very slow rates. This phenomenon is a process of entropy production.

Let us now study a few examples. The aim is not to provide a large-scale picture of matter dispersal on Earth; instead, it is to discuss how the concept of entropy matters and works in specific situations.

2.2.1 Ore deposits

Despite these controversies, entropy is a critical concept to understand the availability of mineral resources. This section builds mainly on the analysis of ore deposit formation in geochemistry (Heinrich and Candela 2014).

Non-radioactive atoms are conserved in chemical changes; therefore, human or biological activities do not alter their quantity on Earth.3 Here, the problem of resources is similar to energy: what matters is not the quantity of the intended atoms existing on Earth. It is primarily their configurations.

When analyzing ore deposits, the critical factor is the concentration of the intended ores. The higher the concentration of an ore deposit is, the less chemical and mechanical work is required to purify it to functional levels, and, accordingly, the higher its profitability is. If the local concentration of ores in the Earth’s crust were equal to its average everywhere, even the most common resources could not be extracted fruitfully. Then, it is the departure from maximum entropy situations, as far as the concentrations of ore are concerned, that is the crucial factor in analyzing mineral resources.
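The connection with entropy can be made quantitative in an idealized setting (our illustration, assuming an ideal mixture, which real ores are not). For a mole of rock containing a molar fraction x of the intended element, the entropy of mixing is

ΔS_mix = −R (x log x + (1 − x) log(1 − x)), where R is the gas constant

and the minimum work needed to separate the element at temperature T is of the order of T times this entropy, roughly −RT log x per mole of extracted element when x is small. The more dilute the deposit, the higher the principled energetic cost of extraction, on top of the practical costs that dominate in mining.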

What is the origin of the heterogeneities that lead to usable ore deposits? If we consider lava of the Earth’s average composition in an insulated box, such deposits would not appear spontaneously because of the second principle of thermodynamics. However, the Earth is not in thermodynamic equilibrium. The radioactive decay of some of its components warms its insides up — a transitory but prolonged process. Moreover, it is an open system. The Sun provides energy on its surface. The space vacuum acts as a cold source where energy is lost, mainly in radiative form. Between cold sources and warm sources, macroscopic motions occur spontaneously, leading to convection cells. They happen in the mantle, the oceans, and the atmosphere. Convection is just an example of a macroscopic phenomenon that occurs spontaneously in open systems far from thermodynamic equilibrium, and specifically on Earth — Prigogine’s work mentioned above aims precisely to analyze this kind of situation. Another example is the cycle of water, which involves state changes, water becoming alternately liquid, gas, and sometimes solid.

These various macroscopic phenomena can lead to the magnification of ore concentration, often due to a contingent combination of processes. For example, heavy compounds tend to sink to the core of the Earth; however, melted magma rises due to convection in the mantle. In magma chambers, gravitation leads heavier elements to sink and thus to the appearance of heterogeneities. Later, the resulting rocks can be submerged or exposed to rainwater, and some compounds will dissolve. If the elements of interest dissolve, they may precipitate at a specific location where appropriate physicochemical conditions are met, leading to an increased concentration. Alternatively, some elements, for example, gold, may not dissolve in most conditions, but other compounds surrounding it may dissolve and be washed away, exposing gold and increasing its local concentration. Then, gold nuggets can be transported by water and concentrated further in specific places in streams — a key and iconic factor of the American gold rush. In general, ore deposits result from such combinations of processes (Heinrich and Candela 2014; D. Scott et al. 2014).

In a nutshell, ore deposits result from macroscopic phenomena that occur on Earth because it is far from thermodynamic equilibrium. We did not develop this case, but biotic activities also contribute to this process. In any case, human activities benefit from this naturally occurring process and pursue it further by several technical or industrial methods that produce very high concentrations of the intended element. All these processes reduce the local entropy, but they require macroscopic work and produce entropy, which is released into the surroundings — at the level of Earth as a whole, entropy is released by thermal radiation.

2.2.2 Wear and entropy

In the use of artifacts, wear can lead to the dispersion of the compounds of the objects used. For example, the emission of fine particles from vehicles stems as much from the wear of tires and brakes as from the combustion in engines (Rogge et al. 1993).

The wear of mechanical components stems from the transformation of part of the mechanical work into heat, leading to entropy production. Part of this entropy is released into the surroundings as heat. Another part increases the entropy of the component. Entropy production at the level of a machine’s elements is a general framework to understand the wear caused by their use (Bryant et al. 2008; Amiri and Khonsari 2010). Similar phenomena occur in electronics and microelectronics. Electric currents increase the probability that atoms move in the components, leading to higher entropy than in the designed configuration, and ultimately to component failure (Basaran et al. 2003). A similar phenomenon also occurs in batteries and explains their “aging” (Maher and Yazami 2014).

Another compelling case is the appearance of microplastics at increasingly high levels in seawater. These microplastics’ origin seems to be the washing machines’ wastewater from cleaning synthetic textiles (Browne et al. 2011). The resulting concentration in the environment is sufficient to threaten wildlife (do Sul and Costa 2014).

All these examples show that artifacts are altered over time through wear. Moreover, this alteration can result in particles that are dispersed in the surroundings and threaten human and wildlife health. All these phenomena are entropy increases.

2.2.3 Bioaccumulation, bioconcentration, biomagnification

Living beings, especially bacteria, can contribute to the formation of ore deposits by their biochemical activities. However, there is another relevant extension of this discussion in the biological realm. Biotic processes concentrate some compounds found in their milieux. In the Anthropocene, these compounds are also the ones released in the environment by industrial processes and products. The accumulation of such compounds in biological organisms impacts their survival and the safety of their consumption by humans.

Several processes are involved in this phenomenon (Barron 2003). The first is the bioaccumulation from sediments. This process is very relevant for heavy compounds that sink to the ocean floor, such as heavy metals or microplastics. It largely depends on the behaviors of the organisms involved. Some of them, like worms, can ingest relatively old sediments, whereas other organisms feed at the surface of sediments.

The second process is the bioconcentration from compounds present in water. Some compounds existing in water have a higher affinity with particular organs or tissues than with water itself. As a result, even assuming that equilibrium between intake and excretion of the said compound is reached, they are in higher concentration in organisms than in water. For example, lipophilic and hydrophobic compounds such as PCBs accumulate in fat tissues.

The bioaccumulation from sediments is made possible by organisms’ feeding activity, a process far from thermodynamic equilibrium. Similarly, bioconcentration from water stems mainly from the fast chemical exchanges taking place during respiration, in gills for large organisms. In both cases, accumulation is made possible by the specific chemical compositions of organisms. The latter are generated and sustained by organisms — a process far from thermodynamic equilibrium. Depending on the case, the concentration inside the organism can reach a balance between intake and release. Conversely, organisms can collect compounds in their milieu without reaching the equilibrium concentration.
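A minimal toxicokinetic sketch can fix these ideas (our illustration, a standard one-compartment model, not a model taken from the works cited above): the organism takes the compound up from water at rate k_u and eliminates it at rate k_e, so the internal concentration approaches a steady state (k_u/k_e) × C_water, and the ratio k_u/k_e acts as a bioconcentration factor.

```python
def internal_concentration(c_water, k_u, k_e, dt=0.01, t_max=50.0):
    """One-compartment model: dC/dt = k_u * C_water - k_e * C (Euler steps)."""
    c, t = 0.0, 0.0
    while t < t_max:
        c += dt * (k_u * c_water - k_e * c)
        t += dt
    return c

# Hypothetical rates for a lipophilic compound: fast uptake, slow elimination.
c_water = 1e-6  # concentration in water (arbitrary units)
print(internal_concentration(c_water, k_u=100.0, k_e=0.5))  # ~ 2e-4
print((100.0 / 0.5) * c_water)  # steady state: 200 times the water level
```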

The last relevant process is biomagnification in food chains. Living beings feed on each other. Bioaccumulation from sediments and bioconcentration lead to the presence of compounds in prey organisms. Then, these compounds become part of a predatory organism’s food and can accumulate further in the latter. This process follows the food chain, magnifying the compound’s concentration, which becomes higher than in sediments and water. The bioaccumulation of heavy metals and PCBs leads to organisms that are unfit for consumption.

In these examples, metals and chemicals’ concentration increases dramatically due to biological, far from thermodynamic equilibrium processes. There is a reduction of their spatial distribution entropy. For many compounds of industrial origin, this process is detrimental to the biosphere in general and humankind in particular.

2.2.4 Conclusion on matter dispersal

There are geological processes that occur far from thermodynamic equilibrium. These processes lead to a distribution of compounds far from what we would expect by a straightforward application of the second principle of thermodynamics. Humankind takes advantage of this situation by extracting ores from deposits with sufficient concentrations and concentrating them further in industrial processes. However, processes such as the wear of artifacts also lead to the dispersion of various compounds in the biosphere.

The presence of these compounds at these concentrations is new from an evolutionary perspective, and there is no specific biological process stemming from evolution that mitigates their consequences. Depending on their properties and the physiology of the organisms exposed, they can lead to bioaccumulation, bioconcentration, and biomagnification in the food chain. These processes lead to a high concentration of several compounds at the worst possible locations for biodiversity and humankind: in the bodies of organisms. In these cases, the decrease of the entropy corresponding to the concentration of these compounds is detrimental.

2.3 Conclusion

In a nutshell, entropy describes the degradation of energy in physics. This degradation means going from unlikely macrostates towards more likely macrostates, that is to say, from specific configurations to more generic ones.

Defining entropy requires the articulation between microstates and macrostates. The choice of theoretical macrostates depends on their causal role, and the latter depends on the couplings with surroundings. Therefore, entropy also depends on the nature of these couplings. Moreover, available energy, exergy, depends not only on the nature of the variables involved in these couplings but also on their values. Nevertheless, some couplings and macroscopic descriptions are generic to a large extent for technological purposes. For example, the mobility of persons and goods leads to analyzing macroscopic mechanical couplings.

In engineering, entropy typically comes into play to analyze a machine’s functioning, starting historically with heat engines. However, machines’ long-term functioning also involves entropy to analyze their degradation, and so does their production, as exemplified by our discussion on mineral resources. This remark connects with the concept of autopoiesis in biology: an organism has to maintain or regenerate its parts to last over time. Similarly, artifacts have to be analyzed over their life cycles. In that regard, processes will always produce entropy. The meaning of circular economy, if any, cannot be reversible cycles and perpetual motion. The economy will always lead to entropy production; however, this production can be mitigated by organizing far from equilibrium cycles in the economy, limiting resource dispersal.

The design of machines is also external to the analysis of functioning machines, and the function of machines and artifacts can change depending on the user. These ideas are reminiscent of biological evolution. Taking all these aspects into account leads to a more biological view of technologies, for example, considering technics as exosomatic organs (Stiegler and Ross 2017; Montévil et al. 2020). Ultimately, available energy depends on a given technological apparatus, with principled limits for broad classes of devices. The problematic increases of entropy are relevant from the perspective of technological, social, and biological organizations.

3 Entropy and organizations

Schrödinger (1944) emphasized that biological situations remain far from thermodynamic equilibrium. There is no contradiction with the second principle of thermodynamics because biological systems are open systems that take low entropy energy from their surroundings and release entropy. We already discussed macroscopic movements of matter on Earth that occur spontaneously far from thermodynamic equilibrium and sometimes lead to ore deposits forming, thus to low entropy configurations.

Schrödinger went further and proposed to analyze biological order as negative entropy. There is little doubt that biological organizations correspond to low entropy insofar as we can define their entropy. There have been several theoretical works along this line (Nicolis and Prigogine 1977; van Bertalanffy 2001). However, conflating low entropy and the concept of organization is not accurate. Not everything that contributes to the low entropy of biological situations is relevant for their organizations. For example, a cancerous tumor increases morphological complexity but decreases organization (Longo et al. 2015). Similarly, we have discussed biomagnification and other processes that reduce the entropy of chemicals’ spatial distribution but are detrimental to biological organizations. Moreover, entropy is extensive; it is proportional to the size of a system. By contrast, a biological organization’s critical parts may not amount to much quantitatively, such as a single nucleotide change or a few molecules in a cell, which can both have significant consequences.

This kind of shortcoming led to proposing another quantity to address biological organizations: anti-entropy (Bailly and Longo 2009; Longo and Montévil 2014a). Anti-entropy was first a macroscopic extension of far from equilibrium thermodynamics. The term anti-entropy stems from an analogy between the relation matter/anti-matter and entropy/anti-entropy. Entropy and anti-entropy are similar, they have an opposite sign, and at the same time, they have a qualitatively different meaning. They only “merge” when the organism dies or, more generally, when an organization collapses.

To go further, we have to introduce several theoretical concepts designed to understand biological organizations and discuss their connection with entropy. Then, as an important application, we will discuss how the nature of biological organizations leads to two specific vulnerabilities to Anthropocene changes.

3.1 Theoretical background

We first discuss couplings between biological organizations and their surroundings, given that such couplings are a crucial component in the definition of entropy. Then, we discuss the nature of putative biological microspaces and show that they require introducing the fundamental concept of historicity. Last, we address how organizations maintain themselves far from thermodynamic equilibrium by the interdependencies between their parts. In the whole discussion, historicity is a central feature of biology that has no counterpart in theoretical physics. Together, these elements provide the theoretical background to specify the concept of anti-entropy.

3.1.1 Couplings with the surroundings

The couplings between a system and its surroundings are critical to defining entropy and thermodynamic equilibrium, as discussed in section 2.1.2. However, in biology, the couplings between organisms and their milieu are a far more complex theoretical notion.

First, biology requires historicizing the concept of coupling. Couplings change in evolution and development. It is even tempting to consider specific principles about biological couplings (Kirchhoff et al. 2018). Once living objects are exposed to phenomena that impact their organization, they tend to establish couplings with these phenomena in various ways, a process that we have called enablement (Longo et al. 2012; Longo and Montévil 2013). For example, some phenomena can be a source of free energy. This is the case for light, which enabled photosynthetic organisms. Similarly, humans have recently concentrated radioactive compounds for industrial purposes. In Chernobyl, Ukraine, wildlife was exposed to these compounds, and fungi appeared that metabolize their intense radiation (Dadachova et al. 2007). However, couplings are not limited to significant sources of free energy. For example, many organisms also use light to perceive their environments.

In these examples, the inside and the outside of an object are well-defined. However, the organisms’ surroundings are not just static. Instead, organisms change them actively. With the ability to move, organisms can discover and obtain different surroundings. In the process of niche construction, they actively produce part of their surroundings (Odling-Smee et al. 2003; Pocheville 2010; Bertolotti and Magnani 2017). Beyond the concept of coupling between inside and outside, biology involves couplings between different levels of organization. These couplings stem from a shared history, for example, between a multicellular organism and its cells, and organisms and ecosystems (Soto et al. 2008; Longo and Montévil 2014b; Miquel and Hwang 2016).

In a nutshell, physicists established thermodynamics for systems where the coupling between a system and its surroundings is well defined and usually static or, at least, follows a pre-defined pattern. This framework enables engineers to control industrial processes and the resulting artifacts. By contrast, the coupling between living organizations and their surroundings is not well defined by a sharp distinction between the inside and the outside of the organism. It is not a theoretical invariant. Current couplings result from natural history and continue to change, producing history (Miquel and Hwang 2016; Montévil et al. 2016). A species’ appearance presents many opportunities for new couplings in ecosystems, such as new possible niches (Longo et al. 2012; Gatti et al. 2018). We can include social organizations and their production of artifacts in the discussion — artifacts are analyzed as exosomatic organs by Lotka (1945) and Stiegler and Ross (2017). Living matter has then coupled some of its processes, namely physicists’ activity, to remarkably weak phenomena at biological scales such as gravitational waves or interactions with neutrinos.

Couplings are far more proteiform in biology than in the standard framework of thermodynamics. In artifacts and industrial processes, let us recall, thermodynamic couplings correspond to the purpose of the process: generating usable work. In biology, the plasticity of couplings corresponds to the variability of biological functions that is intrinsic to the historical changes of biological objects.

3.1.2 Microspaces in biology

The situation of candidate microspaces in biology departs from the core hypotheses used to define entropy.

First, in biology, physical space is broken down by membranes at all scales, from organelles and cells to tissues, organs, and organisms. This spatial organization restricts diffusion and the rate of entropy production. In turn, this partial compartmentation ensures that, for many kinds of molecules, the number of molecules remains low in compartments such as cells. Chromosomes, in particular, exist in only a few copies in each cell. We have seen with the example of coin throwing that a macroscopic variable is stable for a high number of throws but highly random for a small number of throws. It is the same for molecular processes in cells: the low copy number of many molecules leads to randomness (Kupiec1983; Kaern et al2005; Corre et al2014). This randomness, in turn, implies that the deterministic picture for collections of molecules is not sound for cells (Lestas et al2010).
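To make this scale effect concrete, here is a minimal simulation of the coin-throwing example (our illustration, not taken from the cited studies): the fraction of heads is nearly deterministic for many throws and widely dispersed for few, just as concentrations are for low-copy-number molecules.

```python
import random

random.seed(0)

def fraction_heads(n: int) -> float:
    """Fraction of heads among n fair coin throws."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    samples = [fraction_heads(n) for _ in range(100)]
    mean = sum(samples) / len(samples)
    std = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
    print(f"{n:>7} throws: mean = {mean:.3f}, fluctuation (std) = {std:.4f}")
# The fluctuation shrinks roughly as 1/sqrt(n): the macroscopic variable is
# nearly deterministic for many throws but highly random for a few, as for
# low-copy-number molecules in a cell.
```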

Second, cellular proteomes are complex: they include networks of numerous interacting compounds exhibiting complex dynamics (Kauffman1993; Balleza et al2008). To an extent, these dynamics can even “improvise” when, for example, the regulation of a gene’s expression is artificially jammed (David et al2013; Braun2015).

Last, the nature of the molecules existing in cells and organisms is not a theoretical invariant. As a result, we have to take into account the changes in the relevant molecules. For example, proteins are chains of amino acids. If we consider only proteins with 200 amino acids, there are 22^200 possible molecules. This number is gigantic: if all the particles of the universe (10^80) were devoted to exploring this space of possibilities by changing at the Planck time scale, they would not manage to explore much of this space within the lifetime of the universe (Longo et al2012). Unlike Boltzmann, we cannot build on the idea that microscopic possibilities would be explored uniformly, leading towards generic configurations (the most probable macrostate). Instead, we have to focus on how systems explore possibilities in a historical process.
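The order of magnitude can be checked with a back-of-the-envelope computation (a sketch using round cosmological figures; the bound on exploration is deliberately generous).

```python
import math

# Possible proteins of length 200 built from the 22 proteinogenic amino acids
log10_proteins = 200 * math.log10(22)                 # about 268.5

# Generous bound on exploration: every particle in the universe (~10^80)
# tries a new configuration at every Planck time (~5.4e-44 s) during the
# age of the universe (~4.3e17 s).
log10_explorable = 80 + math.log10(4.3e17 / 5.4e-44)  # about 140.9

print(f"possible proteins: 10^{log10_proteins:.1f}")
print(f"explorable states: 10^{log10_explorable:.1f}")
print(f"shortfall:         10^{log10_proteins - log10_explorable:.1f}")
# Even this absurdly generous bound falls short by more than 100 orders of
# magnitude: possibilities cannot be explored uniformly, pace Boltzmann.
```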

If the difficulty were limited to this aspect, it would not entirely hinder mathematical reasoning from finding generic patterns. For example, mutations without selection (neutral mutations) lead to a random walk in the space of possible DNA sequences, and probability distributions describe this process well. Its generic properties are used to assess the genealogical proximity of different species. Similarly, we can analyze the generic properties of large networks of interacting molecules if the interactions are generic, i.e., if they all have the same nature.

The heart of the theoretical problem is that this process leads to molecules with qualitatively different behaviors. For example, molecular motors or tubulins do very different things from enzymes: molecular motors “crawl” on macromolecular structures, and tubulins spontaneously assemble into fibers. Moreover, molecules contribute to macroscopic structures and interact with them. In this process, their biological meanings acquire qualitative differences. For example, crystallin proteins contribute to the mechanical integrity of the eye, and they are transparent so that they do not hinder the flow of light.

In the relevant organic and ecosystemic contexts, the specific properties of proteins impact the exploration of DNA sequences. As a result, the latter differs from a random walk, and its determinants are multiple. Moreover, historicity is relevant even for the dynamics of neutral mutations, mutations having no functional consequences. Mutations can be reversed or prevented by proteins that appeared historically. Similarly, reproduction processes change in evolution, which influences all genetic dynamics, even for neutral mutations.

We consider how living beings live to be the main interest of biology. Therefore, functionally relevant changes are fundamental. In the case of mutations, the biologically relevant variations are the ones that impact biological organizations in one way or another. When we discuss the primary structure of proteins (their sequence) or DNA sequences, we consider combinations of elementary elements, just as a text is a combination of letters and other symbols. If we take this combination process alone, all patterns seem equivalent, which wrongly suggests an analogy with Boltzmann’s hypothesis of molecular chaos. In biology, these combinations are not biologically equivalent. They can lead to qualitative novelties and to changes in the exploration of these combinatorial possibilities. In a nutshell, not only is the space of combinatorial possibilities massive, but the “rules” of the exploration of this space depend on positions in this space — and these positions are not the sole determinants. These rules are as diverse as functional biological processes, and thus they are not generic properties; instead, they are historical (Montévil et al2016; Montévil2019b).

The epistemological and theoretical consequences of this situation are far-reaching, and there is no consensus on the appropriate methods and concepts to accommodate them (Bich and Bocchi2012; Montévil et al2016; Longo2018; Kauffman2019). We have proposed to invert the epistemic strategy of physics. Physics understands changes by invariance: equations and their invariants describe the changes of states but do not change themselves. By contrast, in biology, we argue that variations come first and invariants come second; they are historicized (Longo2018). We call the latter “constraints” (Soto et al2016; Montévil2019c). We have argued that, unlike in physics theories, the definition of concrete experiments always has an essentially historical component in biology. In physics, experiments can be performed de novo, whereas biological experiments and their reproducibility rely on objects having a common origin, thus on the ability of organisms and cells to reproduce (Montévil2019a).

In particular, the space of possibilities cannot be pre-stated at either the microscopic or the macroscopic level — assuming that stating possibilities requires describing their causal structure explicitly. For example, the space generated by protein combinatorics is not genuinely a space of possibilities. It does not make explicit that molecules like molecular motors or tubulins are possible. Moreover, this space is far from complete; for example, proteins are not just amino acid sequences, and they can recruit other elements, such as iron in hemoglobin or iodine in thyroid hormones. Nevertheless, this space is relevant: it is a space of possible combinations of amino acids. This space is generated mathematically by the transformations defined by mutations and the enzymes involved in transcription and translation (Montévil2019b). However, this theoretical construct is insufficient to state the possible roles of the said combinations in biological organisms. In this regard, possibility spaces in biology are not just a way to accommodate changes; they are a component of biological changes and are co-constructed by them.

3.1.3 Persisting organizations

Several theoretical biologists have developed the idea that the parts of a biological organization maintain each other (Varela et al1974; Rosen1991; Kauffman1993; Letelier et al2003). The aim of this schema is to understand how organizations persist in spite of the spontaneous trend towards entropy increase — given that, unlike flames or hurricanes, biological organizations are not simple self-organizations of flows. In particular, Kauffman (2002) articulates constraints and work in the thermodynamic sense. In Kauffman’s schema, work maintains constraints, and constraints canalize work. This interdependency leads to the persistence of work and constraints as long as the surroundings allow it.

We have developed a general, formalized framework describing the interplay between processes of transformation and constraints. In this framework, a constraint is invariant with respect to a process, at a given time scale, but it canalizes this process. A constraint C1 can act on a process that maintains another constraint C2. Then, we say that C2 depends on C1. We hypothesized that relations of dependence in organizations lead to cycles. For example, C1 depends on C2, C2 depends on C3, and C3 depends on C1 (Montévil and Mossio2015; Mossio et al2016). We call this kind of circularity closure of constraints.
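As a toy illustration (ours, and a simplification of the formal definition in the cited papers), dependencies between constraints can be represented as a directed graph, and a constraint participates in a closure when it lies on a cycle of dependencies:

```python
from typing import Dict, List

def part_of_a_cycle(deps: Dict[str, List[str]], start: str) -> bool:
    """True if `start` can reach itself by following 'depends on' edges."""
    stack, seen = list(deps.get(start, [])), set()
    while stack:
        c = stack.pop()
        if c == start:
            return True
        if c not in seen:
            seen.add(c)
            stack.extend(deps.get(c, []))
    return False

# deps["C1"] lists the constraints that maintain C1 (C1 depends on them).
deps = {"C1": ["C2"], "C2": ["C3"], "C3": ["C1"], "gravitation": []}

for c in deps:
    print(c, "belongs to a cycle of dependencies:", part_of_a_cycle(deps, c))
# C1, C2, C3 mutually maintain one another (closure); "gravitation" stands
# for an external constraint that is used but not maintained by the closure.
```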

Closure of constraints is very different from being closed in the thermodynamic sense. Organizations depend on flows from the surroundings at the level of processes to remain far from thermodynamic equilibrium. For example, mammals depend on food and oxygen flows. They also depend on external constraints that are necessary to sustain internal constraints but are not maintained by the closure. For example, many organizations depend on gravitation or the physical periodicity of night/day cycles.

Constraints are not necessarily macroscopic (and thus thermodynamic). Constraints are patterns structuring processes of transformation; they can exist at all spatial and temporal scales. For example, DNA sequences are constraints on gene expression. DNA 3D configurations influence the accessibility of genes and are also constraints on gene expression. At a larger scale, the geometry of the vascular system is a constraint on blood flow in tetrapods.

In this framework, biological entities maintain their configuration far from thermodynamic equilibrium in a distinct way. Let us recall that, in physics, a configuration far from thermodynamic equilibrium can appear and persist by the self-organization of flows stemming from their surroundings, like in flames or hurricanes. Biological organizations last for different reasons. In the framework of the closure of constraints, organizations persist thanks to the circular maintenance of constraints. They are not the result of spontaneous self-organization of flows (Longo et al2015).

Organizations are not spontaneous in the sense that they stem from history. Self-organization in physics is generic; for example, convection cells always follow the same pattern. By contrast, closure of constraints is compatible with many qualitatively different configurations. For example, different bacteria can live in the same milieu. Reciprocally, in the historicized epistemological framework that we have hinted at, invariants (constraints) cannot be postulated as in physics; they require an explanation. Closure of constraints is one way to explain the relative persistence of some constraints (Montévil et al2016; Mossio et al2016; Montévil2019c). Natural selection is another, complementary way to explain it.

Closure of constraints describes constraints collectively stabilizing each other. It does not follow, however, that the constraints of an organization remain static. On the contrary, there are limits to the stability of biological organizations. For example, intrinsic variations follow from the small copy number of most molecules in cells (Lestas et al2010). As a further illustration, let us consider a gene coding for a fluorescent protein, but with a mutation that would prevent the formation of the said protein if gene expression were exact. However, protein production is not exact. Randomness in gene expression generates a diversity of variants, including the fluorescent protein, and bacteria presenting the mutated gene will be fluorescent (Meyerovich et al2010).

Actual biological organizations result from the iterative integration of functional novelties. Novelties are random in the sense that they cannot be predicted before their appearance; moreover, they are not generic outcomes. As discussed above, they provide a specific contribution to organizations. This specificity stems both from the structure of constraints and from their articulation with an organization. As a result, the theoretical definition of organisms integrates relational and historical approaches, which requires a proper theorization (Montévil and Mossio2020).

3.1.4 Conclusion

What, then, could be a theoretical specification of anti-entropy? First, when entropy is low, supplementary macroscopic variables are necessary to specify the system. For example, if a gas is mostly in the corner of a room, it is necessary to specify which corner, its size, the difference in concentration between this corner and the rest of the room, etc. Biological situations involve supplementary quantities of this kind to describe their properties, physiology, and life cycles, so organizations are often conflated with low entropy.

To overcome this confusion, we propose to build anti-entropy on the concept of organization as closure of constraints. Then, not only macroscopic variables, and not all of them, play a role in anti-entropy, as discussed above; constraints of all sizes do. The core reason for this property is that small features of an organism can have large-scale consequences.

Moreover, anti-entropy aims to capture the singularity of a biological situation in the process of individuation at all levels (evolution, ecosystems, organisms). Therefore, the specificity of constraints, that is, how improbable they are when probabilities can be defined, should play a central role. This specificity can then be assessed relative to the organization; in other words, we ask how specific constraints have to be to play their role in the organization. Here, we are introducing the notion that coarse-graining, in biology, stems from organizations.

In a nutshell, we propose to consider that an element relevant for anti-entropy satisfies three criteria. i) It contributes to organization sensu closure of constraints; informally, it has a systemic role in an organism’s persistence. ii) It is the specific result of history. iii) The specific properties in (ii) are the condition for the systemic role in (i).

It follows from this definition that anti-entropy is relative to an organization. A change that increases an organization’s anti-entropy can reduce another’s anti-entropy and even lead to its complete collapse.

There are two ways in which anti-entropy can be non-conservative. First, it can decrease: the organization simplifies; it involves fewer and more generic constraints, the ultimate example being death. This process involves entropy production since it erases parts of the organization that stem from the object’s history. Second, by analogy with entropy production, we propose the concept of anti-entropy production. It corresponds to the appearance of functional novelties, as described above. This process is time-oriented, like entropy production.

There are processes in biology that are analyzed as physical self-organization, such as convection cells or Turing’s morphogenesis (Turing1952). According to our definition, they do not contribute per se to anti-entropy: they are generic. However, their conditions of possibility and their roles in other processes, such as cellular differentiation, can be relevant for anti-entropy. In the latter case, they are enabling constraints for the growth of anti-entropy (Montévil2020, 2019b). Here, we follow a line of reasoning similar to that of van Bertalanffy (2001), who distinguishes mechanized processes that consistently lead to a given result at the level of the parts from non-mechanized processes involving the organism as a whole.

Last, anti-entropy production requires producing a specific situation conveying a specific biological meaning, such as the specific role of a new constraint in an organization. Such situations are not generic outcomes; therefore, they require a work of exploration. This exploration may involve both the new parts and broader changes of the organization. Moreover, it can take place at the level of the individual, a group, a population, or an ecosystem.

In humans, this exploration takes specific forms since, to an extent, it can be performed by intellectual work, using tools such as pen and paper or computers. For example, a new building can be sketched either on paper or with computer software, leading to a pleasing and functional shape. Moreover, calculations should be performed to ensure that the building will not collapse, including during its construction. The exploration does not stop here: artistic models and simulations can help to assess how well the building embeds in its context, especially when the future users, inhabitants, and neighbors can criticize the project. Of course, this is but a sample of the human processes leading to the emergence of specific novelties (Stiegler et al2020).

3.2 Disruptions as entropizations of anti-entropy

We will now discuss how this framework can contribute to understanding the Anthropocene crisis. Let us start with an example.

Seasonal variations constrain living beings and their activities. Biological responses specific to this rhythm appeared in evolution. The internalization of seasonal rhythms is an example of the trend of living beings to establish complex couplings, as discussed above. Many biological events, such as blooms, hatching, and migrations, occur at specific times of the year. The study of periodic events in the living world associated with seasonality is called phenology.

In ecology, “desynchronizations” of activities can break down the relations between populations in an ecosystem. These alterations and their consequences are often called disruptions, and their study is a particularly active field of research. They are relevant economically, socially, and for conservation biology (Morellato et al2016; Stevenson et al2015).

In this section, we argue that understanding these disruptions requires simultaneously analyzing i) the relations in a system and ii) the natural history that originates a specific synchronization iii) that contributes to the populations’ viability. In other words, we think that disruptions decrease anti-entropy.

Let us describe the typical situation in more detail. If all populations followed the same shift, there would be no change in their interactions. However, species use a diversity of cues, called Zeitgebers, to articulate their behavior with the seasons (e.g., temperature, snow, soil temperature, and photoperiod; Visser et al2010). The impact of climate change on phenologies is diverse because, for example, climate change does not impact photoperiods but does impact temperatures. This diversity of phenological changes impacts the possible interactions and can destabilize ecosystems.

For example, Memmott et al (2007) modeled the disruption of plant-pollinator interactions in an ecosystem. In this model, the notion of disruption has a precise meaning, which the authors do not discuss. Let us describe their model. Each plant has a flowering period, and each pollinator has a period of activity. Plant-pollinator interactions stem from empirical data. Plants that are not pollinated are impacted negatively, and so are pollinators with periods of activity during which no plants are available to forage on.

Figure 3: Phenological differences between plants and pollinators after a change of climate (adapted from Memmott et al2007). Left: the situation before the change; the pollinator is viable because some plants flower during its whole activity period. Right: the situation after climate change; the activity periods changed somewhat randomly, and the pollinator has two parts of its activity period without a plant to pollinate, which leads to its disappearance in the model.

This computational model’s outcome is that few plants are vulnerable to the change, but many pollinators are. Plants are relatively robust because pollination can happen at any time during their flowering period. However, pollinators are vulnerable because they need to feed during their whole activity period, see figure 3.
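To make the model’s logic concrete, here is a minimal sketch (our toy version, much simpler than the model of Memmott et al2007; all parameters are invented): plants only need one visit at some point of their flowering period, whereas pollinators need flowers throughout their activity period.

```python
import random

random.seed(0)
SEASON, W = 52, 6   # weeks in a season, width of each period

def viable_pollinator(activity, flowerings):
    """Needs at least one flowering plant at every step of its activity."""
    return all(any(t in f for f in flowerings) for t in activity)

def viable_plant(flowering, activities):
    """Needs at least one pollinator visit at some step of its flowering."""
    return any(t in a for a in activities for t in flowering)

# A "historical", viable configuration: staggered flowerings cover the whole
# season, and every activity period lies inside this coverage.
plants = [range(s, s + W) for s in range(0, SEASON - W + 1, 2)]
pollinators = [range(s, s + W)
               for s in random.sample(range(SEASON - W + 1), 20)]

def shifted(periods, m=4):
    """Climate change: each species shifts by its own uncorrelated amount."""
    return [range(p.start + d, p.stop + d)
            for p, d in ((p, random.randint(-m, m)) for p in periods)]

plants2, pollinators2 = shifted(plants), shifted(pollinators)
for label, pl, po in (("before:", plants, pollinators),
                      ("after: ", plants2, pollinators2)):
    print(label,
          sum(viable_plant(f, po) for f in pl), "viable plants,",
          sum(viable_pollinator(a, pl) for a in po), "viable pollinators")
```

Running this sketch shows the asymmetry of the outcome: after uncorrelated shifts, gaps open in the coverage of the season, and pollinators disappear far more often than plants.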

What happens in this model at a deeper theoretical level? The initial situation lies in a small part of the space of possible activity periods because all plants and pollinators are in a viable configuration. The underlying history of these ecosystems explains why these particular configurations exist. The condition of viability for plants and pollinators leads to a systemic analysis of their networks of interactions at a given time. After a change in the local climate and the subsequent, diverse phenological shifts, a significant number of pollinators and some plants are no longer in a viable configuration. Here, the specific initial situation transforms into a more random or “arbitrary” configuration with respect to viability and natural history. In this model, disruption is the dissipation of the outcomes of history, which impacts the sustainability of the system’s parts via the ecosystem’s interdependencies.

The initial situation contributes to anti-entropy. The populations of the system contribute to their viability by plant-pollinator interactions (i). The initial configuration is specific because it lies in a small part of the possibility space (ii). Last, this specific configuration has an organizational meaning: in our example, all populations are viable because of this specific situation (iii). The initial configuration meets our three criteria; therefore, its specificity is part of the ecosystem’s anti-entropy.

The final configuration is more generic than the initial one; it is more random with respect to viability criteria. Climate change leads to the loss of part of the anti-entropy. This loss corresponds to a randomization of the configuration in the space of activity periods, that is, an increase of entropy in this space. Moreover, this change leads to the disappearance of populations, which means that some of the relevant variables disappear. Part of the biological possibilities collapses.

There are many other situations where similar reasoning enables scientists to analyze disruptions of synchronicities, even though our theoretical interpretation is not explicitly used (for example, Robbirt et al2014; Rafferty et al2015; Memmott et al2007). Moreover, the discussion of anti-entropy and its decrease in disruptions is more general than the case of seasonal synchronicities. Climate change and the other changes of the Anthropocene disperse part of the anti-entropy and produce entropy at the level of the relevant description space — here, activity periods. The latter is not the space of physics, positions and momenta, and the corresponding entropy is not the entropy of physics. The configuration after the change occupies a larger part of the remaining description space than the initial one, and these configurations do not fit the organization of the system (in our example, not all populations are viable).

This discussion shows that biological organizations have particular vulnerabilities. They build on regularities, in particular those of their surroundings. However, these regularities can change, and, in the Anthropocene, they change very quickly due to human activities. Unlike in cybernetics, no feedback stabilizes these couplings, at least not on relatively short time scales. When the surroundings change, fine-tuned organizations become randomized and thus disorganized to an extent. A similar phenomenon occurs, for example, in the case of endocrine disruptors. Chemical industries produce new chemicals, some of which interfere with hormone action. Since these chemicals and families of chemicals are new occurrences in the biosphere, there is no organized response to them. Hormone action is the fine-tuned result of evolution, and endocrine disruptors randomize it. Endocrine disruptors lead to many adverse effects, both for humans and for wildlife (Zoeller et al2012).

We thus have a first organizational concept for the Anthropocene crisis: a partial loss of anti-entropy that corresponds to an increase of biological entropy. Here, entropy is not directly the concept of physics (i.e., with k_B): the growth of entropy occurs for biological quantities relevant to biological organizations, for example, activity periods. The loss of anti-entropy is the loss of specific results of history that used to contribute to the current organization of organisms or ecosystems. This process leads to their disorganization.

3.3 The disruption of anti-entropy production

Disruptions do not only impact the results of history; they also affect the ability to generate history by producing functional novelties. In other words, they also impact anti-entropy production. To introduce this idea, let us start with examples from human activities; we will also provide a biological example in the conclusion of this part.

3.3.1 Lost in translation

Automatic translations provide a simple, compelling example. Let us compare part of a Bourguignon beef recipe with the same text after a translation into Japanese and back into English by Google Translate.

Original text:

1) In a small bowl, combine the butter and flour. Set aside.

2) In a large ovenproof pan, brown the meat, half at a time, in the oil. Season with salt and pepper. Reserve aside on a plate.

3) In the same pan, brown the onion. Add oil, if needed. Add the garlic and cook for 1 minute. Deglaze with the wine and simmer for about 5 minutes. Add the broth and kneaded butter and bring to a boil, stirring constantly with a whisk. Add the meat, the shallot studded with the clove, and the bay leaf. Season with salt and pepper.

Text after translation:

1) In a small bowl, mix the butter and flour. Save it.

2) In a large oven-proof pan, oil the meat in half. Adjust the taste with salt and pepper. Place it on a plate.

3) Burn the onions in the same pot. Add oil as needed. Add garlic and simmer for 1 minute. Remove the glaze with wine and simmer for about 5 minutes. Add the soup and kneaded butter and bring to a boil with constant stirring in a whisk. Add meat, clove-studded shallot and bay leaves. Adjust the taste with salt and pepper.

The outcome is sometimes accurate, sometimes involves a loss of accuracy, and is occasionally meaningless or wrong. It is worth noting that the technical terms “season” and “brown” are replaced, respectively, with a circumlocution, “adjust the taste,” and a wrong translation, “burn.” In one case, a term is replaced by a wrong, more specific one: “cook” becomes “simmer”.

What happened in this process? Google Translate uses a neural machine translation system that builds on preexisting translations to find statistical patterns (Wu et al2016). However, these statistical patterns do not preserve meaning in all cases. For example, one word may have two primary meanings in one language and only one in another — it may even have no good counterpart. Since Google Translate builds on databases, the quality of the outcome depends strongly on whether the use of the word in the lexical context of its sentence preexists in Google’s corpus.

A good translator does not just rely on the usual ways to translate words and sentences but strives to convey meaning in another language. In a recipe, conveying meaning is a practical notion: enabling the reader to perform the recipe. Of course, the stakes of a text are always more complex, but in this case, this remains a primary function, and performing it is not simple. Since cooking methods and ingredients are specific to a locality, the translation of a recipe should not be literal; the translated text has to find its home in a different gastronomic culture.

There are many ways to convey meaning in translation. For example, the translator may choose not to translate a word but to define it instead. The recipe used as an example is a “human” translation from the French. However, in French, “Reserve aside on a plate” would be redundant because “réserver” means to keep aside for later use and is a widely known word; the translation is an implicit definition. Similarly, translating ingredients is a complex operation because it involves substitutions with the ingredients available in the target country. Ultimately, sometimes, the only way to translate a recipe correctly involves tests to reproduce it in a given locality. The meaning of recipes stems from the coupling between a food production and distribution infrastructure and a culinary culture.

To convey a text’s meaning, good translators often need to depart from the text and, a fortiori, from its statistical translation. The statistical translations are the ones that maximize entropy, at least in a conceptual sense (sometimes in the technical sense of information theory), because they are the most probable outputs once we have a database of known translations. In other words, automatic translations are the ones that fit preexisting patterns most closely. By contrast, a good translator departs from the most probable translations and chooses an unlikely translation to convey meaning properly. Sometimes, the translator may even choose not to translate a word, and this kind of choice can ultimately enrich the target language with a new word. Such choices are part of the overall diachrony of language.
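A caricature of this point (a toy frequency table with invented counts, not Google’s actual neural system) shows why argmax-style translation reproduces preexisting patterns:

```python
from collections import Counter

# Hypothetical corpus statistics: how each English cooking term was rendered
# in past translations (the counts are invented for illustration).
renderings = {
    "brown (the meat)": Counter({"color term / burn": 9, "saute": 2}),
    "season":           Counter({"time of year": 80, "adjust the taste": 3}),
}

def statistical_translation(term: str) -> str:
    """Pick the most frequent rendering: the most probable, most generic one."""
    return renderings[term].most_common(1)[0][0]

for term in renderings:
    print(f"{term!r} -> {statistical_translation(term)!r}")
# The argmax over corpus frequencies reproduces preexisting patterns; the
# culinary sense, rare in the corpus, requires an improbable choice that
# only a translator aiming at meaning would make.
```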

This work of the translator fits our concept of novelty (Montévil2019b) and thus corresponds to the concept of anti-entropy production, transposed, in our specific example, to the linguistic and gastronomic interface. Let us recall our assumption that the aim is to enable the reader to perform the recipe, and thus that cooking tests are part of the translator’s tools: translation is never just a linguistic problem. To provide another example: in poetry, a field where translation is especially difficult, the musicality of the translation is often a central factor.

In a nutshell, the preservation of meaning in translation often requires introducing novelties in the translation, akin to the production of biological anti-entropy. Like biological novelties, they are unlikely and, at the same time, convey a specific meaning in the intended context. By contrast, the use of automatic statistical translations leads to a more or less significant loss of meaning because of their inability to introduce such novelties. In this perspective, translators do not optimize the transmission of information sensu Shannon (1948); instead, they add information to preserve the initial meaning.

Let us go back to the term replaced by a more specific one, “cook” becoming “simmer”. In this specific case, even though it is not clear why this substitution took place — Japanese cuisine does not just simmer garlic — the situation fits the more general case where deep learning seems to introduce novelty. Another example is image upscaling with deep learning: details are added to a photograph, such as blades of grass. In both cases, the added details do not have the same meaning as the original text or image. In the recipe, the addition is wrong; in the photograph, the blades of grass are recombinations from a database, not plants from the original scene. This fact should lead to the greatest caution when such methods enhance scientific images used to interpret a phenomenon. In a nutshell, these kinds of additions are not full-fledged novelties in the sense in which a translator produces meaningful novelties.

3.3.2 Developing children

Another interesting example is the interaction between infants and digital media. This interaction does not provide benefits and can be detrimental to children (Brown et al2011). Let us quote part of the explanation given by Marcelli et al (2018).

The sequences presented to toddlers on screens have a double effect: the "show" in perpetual motion captures their eyes, but this capture takes place without any interactive synchrony with what these toddlers can feel, understand, live, experience, etc.

They are passive and submissive spectators who go through the scenario and hear a "mechanical" voice, which, most often, makes them silent. Because there is no prosodic synchronization possible, the toddler remains silent ...

[...] this flow of stimulation leaves the toddler in front of an attractive enigma but one that is difficult to understand.

(Marcelli et al2018, our translation).

In a nutshell, young children are not able to follow a proto-narrative by themselves. Parents “cheat” and adjust their proto-narrative to their children’s behaviors in order for it to make sense for the child. In other words, the parents constitute meaning artificially by improvising on the basis of the infant’s behavior. This meaning-generating activity does not exist with digital media, where the unfolding of the scenario is generic.

The generation of novelties by adults is required for the interaction to make sense for the child. Let us emphasize that novelties, in our overall framework, are not just new patterns; they are functional in a given situation — here, they generate a sufficiently coherent proto-narrative for the child. This role of novelties contrasts with the case of digital media, which capture the focus of babies and infants without the emergence of a proto-narrative.

3.3.3 Second order disruptions

In the cases of both translators and parents, we see that generating novelties is critical to conveying or generating meaning. Novelties contribute to a specific meaning and are not, at the same time, the generic result of the initial situation. They can be improbable, but they may also not even be possible in a positive sense. For example, in translations, words outside of the dictionary can be used, such as untranslated words or neologisms. With the use of current algorithms, the ability to generate such novelties disappears.

Are there similar phenomena in strictly biological situations? Templeton et al (2001) raise the issue of the disruption of evolution and, more specifically, of the process of adaptation by natural selection. If a population is fragmented, the gene flow between the different fragments stops, and the evolutionary processes take place in each fragment independently. The population relevant to the evolutionary analysis shrinks from the initial population to the population of each fragment. Then, the nature of the evolutionary dynamics changes. It becomes dominated by genetic drift, and the genetic diversity of each subpopulation decreases. Natural selection no longer has enough diversity for differential reproduction to shape adaptations. Empirical results support this analysis (Williams et al2003).
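A textbook Wright–Fisher sketch (ours, not the analysis of Templeton et al2001) illustrates the mechanism: heterozygosity decays roughly as (1 − 1/n)^t for n allele copies, so ten fragments of 100 copies lose variation far faster than one population of 1000.

```python
import random

random.seed(1)

def heterozygosity(alleles):
    """H = 2p(1-p) for a biallelic locus."""
    p = sum(alleles) / len(alleles)
    return 2 * p * (1 - p)

def drift(n_copies, generations=200, p0=0.5):
    """Wright-Fisher drift: each generation resamples n allele copies."""
    pop = [1] * int(n_copies * p0) + [0] * (n_copies - int(n_copies * p0))
    for _ in range(generations):
        pop = random.choices(pop, k=n_copies)
    return heterozygosity(pop)

whole = drift(1000)
fragments = [drift(100) for _ in range(10)]
print(f"intact population (1000 allele copies):  H = {whole:.3f}")
print(f"average over 10 fragments (100 copies):  H = {sum(fragments)/10:.3f}")
# Expected decay: H_t ~ H_0 (1 - 1/n)^t, so small fragments lose diversity
# about ten times faster; selection then has little variation to act on.
```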

In the previous subsection, the result of history is the object of the disruption. Here, by contrast, disruption is the loss or impairment of the ability to generate history by functional novelties. Therefore, we call these situations second-order disruptions. They are the disruption of the ability to produce anti-entropy.

4 Conclusion

Entropy is a well-established concept in equilibrium thermodynamics. The notions of “consuming energy” and “consuming mineral resources” are not accurate from the perspective of physics, and the concept of entropy and its derivatives are necessary to address these phenomena. The core of this conceptual point is that, in both cases, configurations matter more than sheer quantities. The concept of entropy leads to considering usable energy. However, the latter depends on the couplings of a system with its surroundings, and these couplings can be diverse. This is the case even when studying the life cycle of a given artifact, that is, beyond the analysis of a single process of this life cycle (resource acquisition, production, use, wear, disposal). As a result, it would make little sense to perform a straightforward accounting of free energy and, therefore, of physical entropy.

The concept of entropy requires rigorous reasoning, and non-equilibrium thermodynamics and theoretical biology are far from being as theoretically stable as equilibrium thermodynamics. Nevertheless, there are definite conclusions.

For example, entropy helps in understanding mineral resources. Earth is an open system, where geological processes contingently magnify the concentration of elements, leading to the formation of ore deposits. Once purified and used to construct artifacts, resources tend to disperse back into the environment. For example, for tires and brakes, wear leads to the dispersal of the matter of their components. Organisms may then concentrate the particles dispersed by industrial processes again, with adverse consequences for both humankind and wildlife. Processes leading to an increase in the concentration of elements are associated with a cost in free energy in one form or another; again, they can happen spontaneously because Earth and the biosphere are far from thermodynamic equilibrium and, a fortiori, are open to fluxes of energy.

Equilibrium analyses are limited to a machine’s functioning or to a given step in its life cycle. By contrast, the life cycle of a machine is far from thermodynamic equilibrium because the production and destruction of the machine are irreversible processes. Moreover, what genuinely matters is articulating artifacts with biological, technological, and social organizations. This point is relevant both in terms of interactions and for transferring some questions from biology to technics and technologies. For example, artifacts also have functions and emerge in a historical process, albeit one different from biological evolution.

In biology, we have emphasized the centrality of organizations and their historical dimension. These aspects lead to the concepts of anti-entropy and anti-entropy production. Anti-entropy corresponds to relevant, specific parts of an organization that are the result of history and perform a role in organizations because of that. Anti-entropy production is the appearance of a novelty in a strong sense: an initially improbable or even unprestatable outcome that provides a specific contribution to the organization. It follows from these definitions that both concepts are relative to a given organization.

These two concepts lead to two kinds of disruption of biological and human organizations. In the disruption of anti-entropy, changes lead to the loss of specific configurations contributing to an organization. In other words, part of the anti-entropy is lost in favor of configurations that are more random with respect to the biological organization. This phenomenon is the entropization of part of the anti-entropy.

Second-order disruptions, the disruptions of anti-entropy production, are the loss of the ability to generate novelties contributing to biological organizations. In the technological examples discussed, the ability to produce specific texts or interactions conveying meaning is disrupted by digital technologies. Similarly, biological evolution is itself the object of disruptions in the Anthropocene.

Overall, this investigation shows that the concept of entropy is critical to understanding the Anthropocene; however, its specific role ultimately depends on the analysis of the relevant physical processes and biological or social organizations. The theory of biological organization, in particular, remains a work in progress.

Acknowledgments

This work has received funding from the MSCA-RISE programme under grant agreement No 777707 and the Cogito Foundation, grant 19-111-R. We thank Giuseppe Longo, Jean-Claude Englebert, Alejandro Merlo Ote and the IRI Team for comments on previous versions of this manuscript.

References

  1. Amiri, M. and Khonsari, M.M. (2010). On the thermodynamics of friction and wear—a review. Entropy, 12, 5:pp. 1021–1049. ISSN 1099-4300. https://www.mdpi.com/1099-4300/12/5/1021. doi: 10.3390/e12051021.
  2. Ayres, R.U. (1999). The second law, the fourth law, recycling and limits to growth. Ecological Economics, 29, 3:pp. 473 – 483. ISSN 0921-8009. http://www.sciencedirect.com/science/article/pii/S0921800998000986. doi: 10.1016/S0921-8009(98)00098-6.
  3. Bailly, F. and Longo, G. (2009). Biological organization and anti-entropy. Journal of Biological Systems, 17, 1:pp. 63–96. doi: 10.1142/S0218339009002715.
  4. Balleza, E., Alvarez-Buylla, E.R., Chaos, A., Kauffman, S., Shmulevich, I., and Aldana, M. (06 2008). Critical dynamics in genetic regulatory networks: Examples from four kingdoms. PLoS ONE, 3, 6:p. e2456. doi: 10.1371/journal.pone.0002456.
  5. Barron, M.G. (2003). Bioaccumulation and bioconcentration in aquatic organisms. In D.J. Hoffman, B.A. Rattner, G.A. Burton, and J. Cairns, eds., Handbook of Ecotoxicology, pp. 877–892. Lewis Publishers, Boca Raton, Florida.
  6. Basaran, C., Lin, M., and Ye, H. (2003). A thermodynamic model for electrical current induced damage. International Journal of Solids and Structures, 40, 26:pp. 7315 – 7327. ISSN 0020-7683. http://www.sciencedirect.com/science/article/pii/S0020768303004761. doi: 10.1016/j.ijsolstr.2003.08.018.
  7. van Bertalanffy, L. (2001). General system theory: Foundations, development, applications. Braziller.
  8. Bertolotti, T. and Magnani, L. (Dec 2017). Theoretical considerations on cognitive niche construction. Synthese, 194, 12:pp. 4757–4779. ISSN 1573-0964. doi: 10.1007/s11229-016-1165-2.
  9. Bich, L. and Bocchi, G. (2012). Emergent processes as generation of discontinuities. In Methods, models, simulations and approaches towards a general theory of change, pp. 135–146. World Scientific, Singapore. doi: 10.1142/9789814383332_0009.
  10. Braun, E. (2015). The unforeseen challenge: from genotype-to-phenotype in cell populations. Reports on Progress in Physics, 78, 3:p. 036602. doi: 10.1088/0034-4885/78/3/036602.
  11. Brown, A. et al (2011). Media use by children younger than 2 years. Pediatrics, 128, 5:pp. 1040–1045. ISSN 0031-4005. http://pediatrics.aappublications.org/content/128/5/1040. doi: 10.1542/peds.2011-1753.
  12. Browne, M.A., Crump, P., Niven, S.J., Teuten, E., Tonkin, A., Galloway, T., and Thompson, R. (2011). Accumulation of microplastic on shorelines worldwide: Sources and sinks. Environmental Science & Technology, 45, 21:pp. 9175–9179. PMID: 21894925. doi: 10.1021/es201811s.
  13. Bryant, M., Khonsari, M., and Ling, F. (2008). On the thermodynamics of degradation. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 464, 2096:pp. 2001–2014. https://royalsocietypublishing.org/doi/abs/10.1098/rspa.2007.0371. doi: 10.1098/rspa.2007.0371.
  14. Buonsante, P., Franzosi, R., and Smerzi, A. (2016). On the dispute between Boltzmann and Gibbs entropy. Annals of Physics, 375:pp. 414–434. ISSN 0003-4916. http://www.sciencedirect.com/science/article/pii/S0003491616302342. doi: 10.1016/j.aop.2016.10.017.
  15. Chibbaro, S., Rondoni, L., and Vulpiani, A. (2014). Reductionism, Emergence and levels of reality. Springer. ISBN 978-3-319-06361-4. doi: 10.1007/978-3-319-06361-4.
  16. Corre, G., Stockholm, D., Arnaud, O., Kaneko, G., Viñuelas, J., Yamagata, Y., Neildez-Nguyen, T.M.A., Kupiec, J.J., Beslon, G., Gandrillon, O., and Paldi, A. (12 2014). Stochastic fluctuations and distributed control of gene expression impact cellular memory. PLOS ONE, 9, 12:pp. 1–22. doi: 10.1371/journal.pone.0115574.
  17. Dadachova, E., Bryan, R.A., Huang, X., Moadel, T., Schweitzer, A.D., Aisen, P., Nosanchuk, J.D., and Casadevall, A. (05 2007). Ionizing radiation changes the electronic properties of melanin and enhances the growth of melanized fungi. PLOS ONE, 2, 5:pp. 1–13. doi: 10.1371/journal.pone.0000457.
  18. David, L., Ben-Harosh, Y., Stolovicki, E., Moore, L.S., Nguyen, M., Tamse, R., Dean, J., Mancera, E., Steinmetz, L.M., and Braun, E. (2013). Multiple genomic changes associated with reorganization of gene regulation and adaptation in yeast. Molecular Biology and Evolution, 30, 7:pp. 1514–1526. doi: 10.1093/molbev/mst071.
  19. Davis, J., Moulton, A.A., Van Sant, L., and Williams, B. (2019). Anthropocene, capitalocene, … plantationocene?: A manifesto for ecological justice in an age of global crises. Geography Compass, 13, 5:p. e12438. https://onlinelibrary.wiley.com/doi/abs/10.1111/gec3.12438. doi: 10.1111/gec3.12438.
  20. Scott, S.D., Holland, H.D., and Turekian, K.K., eds. (2014). Geochemistry of Mineral Deposits, vol. 13 of Treatise on Geochemistry. Elsevier, Oxford, 2nd edn. ISBN 978-0-08-098300-4.
  21. Gatti, R.C., Fath, B., Hordijk, W., Kauffman, S., and Ulanowicz, R. (2018). Niche emergence as an autocatalytic process in the evolution of ecosystems. Journal of Theoretical Biology, 454:pp. 110 – 117. ISSN 0022-5193. http://www.sciencedirect.com/science/article/pii/S0022519318302856. doi: 10.1016/j.jtbi.2018.05.038.
  22. Gayon, J. and Montévil, M. (May 2017). Repetition and Reversibility in Evolution: Theoretical Population Genetics, pp. 275–314. Springer International Publishing, Cham. ISBN 978-3-319-53725-2. https://montevil.org/publications/chapters/2017-GM-Repetition-Reversibility-Evolution/. doi: 10.1007/978-3-319-53725-2_13.
  23. Georgescu-Roegen, N. (1993). The entropy law and the economic problem. In Valuing the Earth: Economics, ecology, ethics, pp. 75–88. MIT Press Cambridge, MA.
  24. Goldstein, S., Lebowitz, J.L., Tumulka, R., and Zanghì, N. (2020). Gibbs and Boltzmann Entropy in Classical and Quantum Mechanics, chap. 14, pp. 519–581. https://www.worldscientific.com/doi/abs/10.1142/9789811211720_0014. doi: 10.1142/9789811211720_0014.
  25. Haraway, D. (05 2015). Anthropocene, Capitalocene, Plantationocene, Chthulucene: Making Kin. Environmental Humanities, 6, 1:pp. 159–165. ISSN 2201-1919. doi: 10.1215/22011919-3615934.
  26. Heinrich, C. and Candela, P. (2014). Fluids and Ore Formation in the Earth’s Crust, vol. 13: Geochemistry of Mineral Deposits of Treatise on Geochemistry, chap. 1, pp. 1–28. Elsevier, Oxford, 2nd edn. ISBN 978-0-08-098300-4. doi: 10.1016/B978-0-08-095975-7.01101-3.
  27. Kaern, M., Elston, T.C., Blake, W.J., and Collins, J.J. (2005). Stochasticity in gene expression: from theories to phenotypes. Nature Reviews Genetics, 6, 6:pp. 451–464. doi: 10.1038/nrg1615.
  28. Kauffman, S.A. (1993). The origins of order: Self organization and selection in evolution. Oxford University Press, New York.
  29. Kauffman, S. (2002). Investigations. Oxford University Press, USA, New York. ISBN 9780195121056.
  30. Kauffman, S.A. (2019). A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press, New York.
  31. Kirchhoff, M., Parr, T., Palacios, E., Friston, K., and Kiverstein, J. (2018). The markov blankets of life: autonomy, active inference and the free energy principle. Journal of The Royal Society Interface, 15, 138:p. 20170792. https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2017.0792. doi: 10.1098/rsif.2017.0792.
  32. Kupiec, J. (1983). A probabilistic theory of cell differentiation, embryonic mortality and DNA C-value paradox. Specul. Sci. Techno., 6:pp. 471–478.
  33. Lestas, I., Vinnicombe, G., and Paulsson, J. (Sep 2010). Fundamental limits on the suppression of molecular fluctuations. Nature, 467:pp. 174–178. doi: 10.1038/nature09333.
  34. Letelier, J.C., Marin, G., and Mpodozis, J. (2003). Autopoietic and (m,r) systems. Journal of Theoretical Biology, 222, 2:pp. 261–272. ISSN 0022-5193. doi: 10.1016/S0022-5193(03)00034-1.
  35. Longo, G. (Sep 2018). How future depends on past and rare events in systems of life. Foundations of Science, 23, 3:pp. 443–474. ISSN 1572-8471. doi: 10.1007/s10699-017-9535-x.
  36. Longo, G., Montévil, M., Sonnenschein, C., and Soto, A.M. (Dec 2015). In search of principles for a theory of organisms. Journal of biosciences, 40, 5:pp. 955–968. ISSN 0973-7138. PMID: 26648040. https://montevil.org/publications/articles/2015-LMS-Search-Principles-Organisms/. doi: 10.1007/s12038-015-9574-9.
  37. Longo, G. and Montévil, M. (apr 2013). Extended criticality, phase spaces and enablement in biology. Chaos, Solitons & Fractals, 55, 0:pp. 64–79. ISSN 0960-0779. https://montevil.org/publications/articles/2013-LM-Extended-Criticality-Enablement/. doi: 10.1016/j.chaos.2013.03.008.
  38. Longo, G. and Montévil, M. (2014a). Biological order as a consequence of randomness: Anti-entropy and symmetry changes. In Perspectives on Organisms, Lecture Notes in Morphogenesis, pp. 215–248. Springer Berlin Heidelberg. ISBN 978-3-642-35937-8. doi: 10.1007/978-3-642-35938-5_9.
  39. Longo, G. and Montévil, M. (January 2014b). Perspectives on Organisms: Biological time, symmetries and singularities. Lecture Notes in Morphogenesis. Springer, Heidelberg. ISBN 978-3-642-35937-8. https://montevil.org/publications/books/2014-LM-Perspectives-Organisms/. doi: 10.1007/978-3-642-35938-5.
  40. Longo, G., Montévil, M., and Kauffman, S. (July 2012). No entailing laws, but enablement in the evolution of the biosphere. In Genetic and Evolutionary Computation Conference. GECCO’12, ACM, New York, NY, USA. https://montevil.org/publications/chapters/2012-LMK-No-Entailing-Laws-Enablement/. doi: 10.1145/2330784.2330946.
  41. Lotka, A.J. (1945). The law of evolution as a maximal principle. Human Biology, 17, 3:pp. 167–194.
  42. Maher, K. and Yazami, R. (2014). A study of lithium ion batteries cycle aging by thermodynamics techniques. Journal of Power Sources, 247:pp. 527 – 533. ISSN 0378-7753. http://www.sciencedirect.com/science/article/pii/S0378775313014018. doi: 10.1016/j.jpowsour.2013.08.053.
  43. Marcelli, D., Bossière, M.C., and Ducanda, A.L. (2018). Plaidoyer pour un nouveau syndrome « exposition précoce et excessive aux écrans » (EPEE). Enfances & Psy, 79, 3:pp. 142–160. https://www.cairn.info/revue-enfances-et-psy-2018-3-page-142.htm. doi: 10.3917/ep.079.0142.
  44. Memmott, J., Craze, P.G., Waser, N.M., and Price, M.V. (2007). Global warming and the disruption of plant–pollinator interactions. Ecology Letters, 10, 8:pp. 710–717. doi: 10.1111/j.1461-0248.2007.01061.x.
  45. Meyerovich, M., Mamou, G., and Ben-Yehuda, S. (Jun 2010). Visualizing high error levels during gene expression in living bacterial cells. Proceedings of the National Academy of Sciences of the United States of America, 107, 25:pp. 11543–11548. ISSN 1091-6490. PMCID: PMC2895060. https://www.ncbi.nlm.nih.gov/pubmed/20534550. doi: 10.1073/pnas.0912989107.
  46. Miquel, P.A. and Hwang, S.Y. (2016). From physical to biological individuation. Progress in Biophysics and Molecular Biology, 122, 1:pp. 51 – 57. ISSN 0079-6107. doi: 10.1016/j.pbiomolbio.2016.07.002.
  47. Montévil, M. (April 2019a). Measurement in biology is methodized by theory. Biology & Philosophy, 34, 3:p. 35. ISSN 1572-8404. https://montevil.org/publications/articles/2019-Montevil-Measurement-Biology-Theory/. doi: 10.1007/s10539-019-9687-x.
  48. Montévil, M. (November 2019b). Possibility spaces and the notion of novelty: from music to biology. Synthese, 196, 11:pp. 4555–4581. ISSN 1573-0964. https://montevil.org/publications/articles/2019-Montevil-Possibility-Spaces-Novelty/. doi: 10.1007/s11229-017-1668-5.
  49. Montévil, M. (2019c). Which first principles for mathematical modelling in biology? Rendiconti di Matematica e delle sue Applicazioni, 40:pp. 177–189. https://montevil.org/publications/articles/2019-Montevil-First-Principles-Biology/.
  50. Montévil, M. (July 2020). Historicity at the heart of biology. Theory in Biosciences. ISSN 1611-7530. https://montevil.org/publications/articles/2020-Montevil-Historicity-Heart-Biology/. doi: 10.1007/s12064-020-00320-8.
  51. Montévil, M. and Mossio, M. (may 2015). Biological organisation as closure of constraints. Journal of Theoretical Biology, 372:pp. 179–191. ISSN 0022-5193. https://montevil.org/publications/articles/2015-MM-Organisation-Closure-Constraints/. doi: 10.1016/j.jtbi.2015.02.029.
  52. Montévil, M. and Mossio, M. (jun 2020). The identity of organisms in scientific practice: Integrating historical and relational conceptions. Frontiers in Physiology, 11:p. 611. ISSN 1664-042X. https://montevil.org/publications/articles/2020-MM-Identity-Organism/. doi: 10.3389/fphys.2020.00611.
  53. Montévil, M., Mossio, M., Pocheville, A., and Longo, G. (aug 2016). Theoretical principles for biology: Variation. Progress in Biophysics and Molecular Biology, 122, 1:pp. 36–50. ISSN 0079-6107. https://montevil.org/publications/articles/2016-MMP-Theoretical-Principles-Variation/. doi: 10.1016/j.pbiomolbio.2016.08.005.
  54. Montévil, M., Stiegler, B., Longo, G., Soto, A.M., and Sonnenschein, C. (June 2020). Anthropocène, exosomatisation et néguentropie, chap. 1, pp. 57–80. Les liens qui libèrent. ISBN 9791020908575. https://montevil.org/publications/chapters/2020-MSL-Anthropocene-Exosomatisation-Negentropie/.
  55. Moore, J.W., ed. (2016). Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism. PM Press.
  56. Mora, T. and Bialek, W. (2011). Are biological systems poised at criticality? Journal of Statistical Physics, 144:pp. 268–302. ISSN 0022-4715. doi: 10.1007/s10955-011-0229-4.
  57. Morellato, L.P.C., Alberton, B., Alvarado, S.T., Borges, B., Buisson, E., Camargo, M.G.G., Cancian, L.F., Carstensen, D.W., Escobar, D.F., Leite, P.T., Mendoza, I., Rocha, N.M., Soares, N.C., Silva, T.S.F., Staggemeier, V.G., Streher, A.S., Vargas, B.C., and Peres, C.A. (2016). Linking plant phenology to conservation biology. Biological Conservation, 195:pp. 60–72. ISSN 0006-3207. doi: 10.1016/j.biocon.2015.12.033.
  58. Mosseri, R. and Catherine, J., eds. (2013). L’énergie à découvert. CNRS Éditions, Paris.
  59. Mossio, M., Montévil, M., and Longo, G. (aug 2016). Theoretical principles for biology: Organization. Progress in Biophysics and Molecular Biology, 122, 1:pp. 24–35. ISSN 0079-6107. PDF. https://montevil.org/publications/articles/2016-MML-Theoretical-Principles-Organization/. doi: 10.1016/j.pbiomolbio.2016.07.005.
  60. Nicolis, G. and Prigogine, I. (1977). Self-organization in non-equilibrium systems. Wiley, New York.
  61. Odling-Smee, F.J., Laland, K.N., and Feldman, M.W. (2003). Niche construction: the neglected process in evolution. Princeton University Press. ISBN 9780691044378.
  62. Pocheville, A. (2010). What niche construction is (not). http://hal.upmc.fr/tel-00715471/.
  63. Rafferty, N.E., CaraDonna, P.J., and Bronstein, J.L. (2015). Phenological shifts and the fate of mutualisms. Oikos, 124, 1:pp. 14–21. doi: 10.1111/oik.01523.
  64. Ripple, W.J., Wolf, C., Newsome, T.M., Galetti, M., Alamgir, M., Crist, E., Mahmoud, M.I., Laurance, W.F., and 15,364 scientist signatories from 184 countries (2017). World scientists’ warning to humanity: A second notice. BioScience, 67, 12:pp. 1026–1028. doi: 10.1093/biosci/bix125.
  65. Robbirt, K., Roberts, D., Hutchings, M., and Davy, A. (2014). Potential disruption of pollination in a sexually deceptive orchid by climatic change. Current Biology, 24, 23:pp. 2845 – 2849. ISSN 0960-9822. doi: 10.1016/j.cub.2014.10.033.
  66. Rogge, W.F., Hildemann, L.M., Mazurek, M.A., Cass, G.R., and Simoneit, B.R.T. (1993). Sources of fine organic aerosol. 3. Road dust, tire debris, and organometallic brake lining dust: roads as sources and sinks. Environmental Science & Technology, 27, 9:pp. 1892–1904. doi: 10.1021/es00046a019.
  67. Rosen, R. (1991). Life itself: a comprehensive inquiry into the nature, origin, and fabrication of life. Columbia University Press, New York.
  68. Rovelli, C. (2017). Is Time’s Arrow Perspectival?, p. 285–296. Cambridge University Press. doi: 10.1017/9781316535783.015.
  69. Schrödinger, E. (1944). What Is Life? Cambridge University Press, London.
  70. Sethna, J.P. (2006). Statistical mechanics: Entropy, order parameters, and complexity. Oxford University Press, New York. ISBN 0198566778.
  71. Shannon, C.E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27:pp. 379–423.
  72. Soto, A.M., Longo, G., Noble, D., Perret, N., Montévil, M., Sonnenschein, C., Mossio, M., Pocheville, A., Miquel, P.A., and Hwang, S.Y. (October 2016). From the century of the genome to the century of the organism: New theoretical approaches. Progress in Biophysics and Molecular Biology, Special issue, 122, 1:pp. 1–82. https://montevil.org/publications/articles/2016-SLN-Century-Organism/.
  73. Soto, A.M., Sonnenschein, C., and Miquel, P.A. (2008). On physicalism and downward causation in developmental and cancer biology. Acta Biotheoretica, 56, 4:pp. 257–274. doi: 10.1007/s10441-008-9052-y.
  74. Stevenson, T.J., Visser, M.E., Arnold, W., Barrett, P., Biello, S., Dawson, A., Denlinger, D.L., Dominoni, D., Ebling, F.J., Elton, S., Evans, N., Ferguson, H.M., Foster, R.G., Hau, M., Haydon, D.T., Hazlerigg, D.G., Heideman, P., Hopcraft, J.G.C., Jonsson, N.N., Kronfeld-Schor, N., Kumar, V., Lincoln, G.A., MacLeod, R., Martin, S.A.M., Martinez-Bakker, M., Nelson, R.J., Reed, T., Robinson, J.E., Rock, D., Schwartz, W.J., Steffan-Dewenter, I., Tauber, E., Thackeray, S.J., Umstatter, C., Yoshimura, T., and Helm, B. (2015). Disrupted seasonal biology impacts health, food security and ecosystems. Proc Biol Sci, 282, 1817:pp. 20151453–20151453. ISSN 1471-2954. doi: 10.1098/rspb.2015.1453.
  75. Stiegler, B. (2018). The Neganthropocene. Open Humanities Press.
  76. Stiegler, B. (2019). The Age of Disruption: Technology and Madness in Computational Capitalism. Polity Press, Cambridge, UK. ISBN 9781509529278.
  77. Stiegler, B., Collectif Internation, Le Clézio, J.M.G., and Supiot, A. (2020). Bifurquer: L’absolue nécessité. Les liens qui libèrent. ISBN 9791020908575. http://www.editionslesliensquiliberent.fr/livre-Bifurquer-609-1-1-0-1.html.
  78. Stiegler, B. and Ross, D. (2017). What is called caring?: Beyond the Anthropocene. Techné: Research in Philosophy and Technology. doi: 10.5840/techne201712479.
  79. do Sul, J.A.I. and Costa, M.F. (2014). The present and future of microplastic pollution in the marine environment. Environmental Pollution, 185:pp. 352–364. ISSN 0269-7491. http://www.sciencedirect.com/science/article/pii/S0269749113005642. doi: 10.1016/j.envpol.2013.10.036.
  80. Templeton, A.R., Robertson, R.J., Brisson, J., and Strasburg, J. (2001). Disrupting evolutionary processes: The effect of habitat fragmentation on collared lizards in the Missouri Ozarks. Proceedings of the National Academy of Sciences, 98, 10:pp. 5426–5432. ISSN 0027-8424. https://www.pnas.org/content/98/10/5426. doi: 10.1073/pnas.091093098.
  81. Turing, A.M. (1952). The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 237, 641:pp. 37–72. doi: 10.1098/rstb.1952.0012.
  82. Varela, F., Maturana, H., and Uribe, R. (1974). Autopoiesis: The organization of living systems, its characterization and a model. Biosystems, 5, 4:pp. 187–196. ISSN 0303-2647. doi: 10.1016/0303-2647(74)90031-8.
  83. Visser, M., Caro, S., Van Oers, K., Schaper, S., and Helm, B. (2010). Phenology, seasonal timing and circannual rhythms: towards a unified framework. Philosophical Transactions of the Royal Society B: Biological Sciences, 365, 1555:pp. 3113–3127. doi: 10.1098/rstb.2010.0111.
  84. Williams, B.L., Brawn, J.D., and Paige, K.N. (2003). Landscape scale genetic effects of habitat fragmentation on a high gene flow species: Speyeria idalia (Nymphalidae). Molecular Ecology, 12, 1:pp. 11–20. https://onlinelibrary.wiley.com/doi/abs/10.1046/j.1365-294X.2003.01700.x. doi: 10.1046/j.1365-294X.2003.01700.x.
  85. Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., Klingner, J., Shah, A., Johnson, M., Liu, X., Kaiser, Ł., Gouws, S., Kato, Y., Kudo, T., Kazawa, H., Stevens, K., Kurian, G., Patil, N., Wang, W., Young, C., Smith, J., Riesa, J., Rudnick, A., Vinyals, O., Corrado, G., Hughes, M., and Dean, J. (2016). Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. http://arxiv.org/abs/1609.08144.
  86. Young, J.T. (1991). Is the entropy law relevant to the economics of natural resource scarcity? Journal of Environmental Economics and Management, 21, 2:pp. 169–179. ISSN 0095-0696. http://www.sciencedirect.com/science/article/pii/009506969190040P. doi: 10.1016/0095-0696(91)90040-P.
  87. Zoeller, R.T., Brown, T.R., Doan, L.L., Gore, A.C., Skakkebaek, N.E., Soto, A.M., Woodruff, T.J., and Vom Saal, F.S. (2012). Endocrine-disrupting chemicals and public health protection: A statement of principles from the endocrine society. Endocrinology, 153, 9:pp. 4097–4110. http://press.endocrine.org/doi/abs/10.1210/en.2012-1422. doi: 10.1210/en.2012-1422.

1. This efficiency is defined as the work produced divided by the heat taken from the hot source.
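In standard notation (a sketch, not taken from the original text: $W$ is the net work delivered, $Q_{\mathrm{hot}}$ the heat drawn from the hot source, and $T_{\mathrm{hot}}$, $T_{\mathrm{cold}}$ the absolute temperatures of the hot source and the cold sink), this definition and Carnot's bound on it read

\[
\eta = \frac{W}{Q_{\mathrm{hot}}}, \qquad \eta \le 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}},
\]

where the bound is attained only by reversible engines.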

2. The concept of a time arrow is somewhat abstract. Intuitively, there is a time arrow if fundamental principles enable us to tell whether a movie is played forward or backward (Gayon and Montévil 2017).
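The second law of thermodynamics supplies such a principle (a standard textbook statement, assuming an isolated system; the notation is not from the original text): the entropy $S$ of an isolated system cannot decrease,

\[
\Delta S \ge 0,
\]

so a movie in which entropy visibly decreases, say a shattered glass reassembling itself, can only be playing backward.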

3. We set radioactive elements aside because radioactive decay transmutes atoms and thus destroys them.