Thermodynamics, in its generalized sense, is the theory and study of how energy transforms matter within all physical systems, from the formation of stars to photosynthesis to the running of a car.1 More formally, thermodynamics is the branch of physical science concerned with the interrelationship and interconversion of different forms of energy, in particular thermal energy, or heat.2 On a more general level, we can say it is the study of how energy transforms matter through processes, of the structures or order that emerge out of these transformations, and of how we can describe those states of order and disorder in terms of information and the capacity to do work. Put energy into a system and that system will respond; this is the nature of a thermodynamic driving force. Any type of energy can be used to push and pull on a material and alter its state and structure. Apply a pressure and see the volume change; increase the temperature and watch the material melt; decrease it and watch it solidify.3
One of the great benefits of thermodynamics is that it applies equally to physical, biological and engineered systems, giving us an integrated framework for the rigorous, and where necessary quantitative, modelling of both ecosystems and industrial economies. Through thermodynamics, we can see that the same processes shape both the development of ecosystems and our technology infrastructure.4 And it is through this understanding of thermodynamics that we engineered our physical environment. Through understanding this process we first learnt to cook; we learnt to shape metals through smelting and to produce steel, and by understanding the subtleties of the process we could create different types of steel by regulating how rapidly the molten metal is cooled into a solid. It is through this understanding that we learnt to make engines that turn heat into mechanical work, to generate electricity from spinning turbines and to make plastics of all forms.
Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium.5 Thermodynamic equilibrium is characterized by an absence of any net flow of matter or energy. More generally, equilibrium is a state in which the system will not change unless given some perturbation from its environment. Isolated thermodynamic systems that are not initially in thermodynamic equilibrium tend, as time passes, to evolve naturally towards it; in the absence of externally imposed forces they become homogeneous in their local properties.6
This condition of equilibrium is enshrined in the zeroth law of thermodynamics, which states that if two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other.7 The zeroth law is clearly a statement about equilibrium, and it is this equilibrium through which we define and measure temperature: two systems in thermal contact are at the same temperature when heat ceases to flow between them.
The first law, in its generalized sense, is a statement of the conservation of energy. It posits that energy can never be created or destroyed, only transformed from one form into another, which implies that the total energy of an isolated system remains constant over time. The first law lets us trace the flow of energy within any physical system and follow its transformation from one form to another throughout the system.
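The first law's bookkeeping can be sketched in a few lines of code. The sketch below assumes the common sign convention in which heat added to the system is positive and work done by the system is positive; the function name and the figures are illustrative, not part of any standard library.

```python
def internal_energy_change(heat_in, work_out):
    """First law for a closed system: dU = Q - W.

    heat_in: heat added to the system (J).
    work_out: work done by the system on its surroundings (J).
    """
    return heat_in - work_out

# A gas absorbs 500 J of heat and does 200 J of expansion work;
# its internal energy rises by the difference:
print(internal_energy_change(500.0, 200.0))  # 300.0
```

Whatever form the energy takes, the ledger must balance: any energy that enters a system either raises its internal energy or leaves again as work or heat.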
The second law of thermodynamics is an expression of the universal tendency towards the dissipation of kinetic and potential energy observable in nature.8 It captures the observation that, over time, differences in temperature, pressure and chemical potential tend to even out in a physical system that is isolated from the outside world. Entropy is a measure of how far this process has progressed: the entropy of an isolated system that is not in equilibrium tends to increase over time, approaching a maximum value at equilibrium.9
Entropy can be understood as a measure of the number of configurations, or degrees of freedom, available to a system. Take the example of a perfect crystal: the atoms are all locked into rigid positions in a lattice, so the number of ways they can be arranged is quite limited. In a liquid, the options increase considerably, and in a gas the atoms can take on many more configurations still. This is why the entropy of a gas is higher than that of a liquid, which is in turn higher than that of a solid.
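This counting picture is made quantitative by Boltzmann's relation, S = k_B ln W, where W is the number of accessible microstates. A minimal sketch, with microstate counts that are purely illustrative rather than measured values:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(num_microstates):
    """S = k_B * ln(W): entropy from the number of accessible microstates W."""
    return K_B * math.log(num_microstates)

# More accessible configurations means more entropy, which is why a gas
# carries more entropy than a liquid, and a liquid more than a crystal:
s_crystal = boltzmann_entropy(10**3)
s_gas = boltzmann_entropy(10**9)
print(s_gas > s_crystal)  # True
```

Because the dependence is logarithmic, even astronomically large microstate counts yield modest entropy values in everyday units.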
When entropy goes up, more information is required to describe the state of the system, and work would have to be done in order to reconfigure the system into its original ordered state. Entropy is, for this reason, also a key measure in information theory, where it quantifies the uncertainty involved in predicting the value of a random variable.10 The second law states that whenever energy is converted from one form to another, some of it becomes low-grade heat; the conversion of energy from one form to another is never 100 percent efficient. The ‘lost’ energy is still energy, but it is no longer high-grade energy that can be used for work, such as moving things or fuelling metabolic processes in plants and animals. The second law is thus one of the few physical laws, if not the only one, that distinguishes a direction of time.11
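The information-theoretic reading of entropy can be shown directly: Shannon entropy, H = −Σ p log₂ p, measures in bits the average uncertainty in predicting a random variable. A minimal sketch, with illustrative probability distributions:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)): average uncertainty in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally uncertain, one full bit per toss;
# a heavily biased coin is more predictable, so it carries less entropy:
print(shannon_entropy([0.5, 0.5]))        # 1.0
print(shannon_entropy([0.9, 0.1]) < 1.0)  # True
```

The more uniform the distribution, the higher the entropy and the more information is needed, on average, to describe an outcome, mirroring the thermodynamic picture of disorder.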
The third law tells us that as a system approaches the absolute zero of temperature (0 K), its entropy approaches a minimum value, and that it is impossible to reach absolute zero through any finite number of processes.12
The second law describes an increase in randomness and in the number of possible configurations over time, yet biological systems are characterized by increases in order and complexity. As a result, it is often noted that biological systems must be functioning at a state far from equilibrium. This observation has led to the extension of standard thermodynamics through the development of what is called non-equilibrium thermodynamics, a recognition that standard thermodynamics only really applies to systems at, near, or moving towards some equilibrium. It is, therefore, legitimate to ask to what extent equilibrium thermodynamics can be generalized to cover the more general situations of non-homogeneous systems, far-from-equilibrium states and irreversible processes. Much effort has been devoted to these objectives, resulting in the various approaches grouped under the generic name of non-equilibrium thermodynamics.13
Most systems found in nature or considered in engineering are not in thermodynamic equilibrium.14 They are changing, or can be triggered to change, over time, and are continuously subject to fluxes of matter and energy to and from other systems. Equilibrium thermodynamics restricts its considerations to processes that have initial and final states of thermodynamic equilibrium; the time-courses of processes are deliberately ignored. In far-from-equilibrium systems, by contrast, the forward and reverse reaction rates no longer balance and the concentrations of reactants and products are no longer constant. The damping of acoustic perturbations and of shock waves are non-stationary, non-equilibrium processes; driven complex fluids, turbulent systems and glasses are further examples of non-equilibrium systems within physics.15
Whereas standard thermodynamics describes systems that have a relatively low exchange with their environment, making them relatively closed, non-equilibrium thermodynamics deals with systems that are in a generalized sense more open than closed, having an almost continuous exchange with their environment. This means we cannot describe them simply as moving from one equilibrium to another; we have to interpret them in terms of constant change, a flux or flow of resources from the environment through the system.
To recognize the difference between equilibrium and non-equilibrium, we can think about a ball at the bottom of a bowl: this is a system in equilibrium. Now imagine the ball rolling down a hill. It is in a state of disequilibrium; as it travels down a gravitational gradient, its potential energy is being continuously released and dissipated. But this ball is really just travelling from one equilibrium to another: sooner or later it will reach its lowest gravitational potential energy and stay there. A truly non-equilibrium system is one that is continuously travelling across some energy gradient, continuously dissipating energy in the process. This is a dissipative system, and it is exactly what biological creatures are. From a thermodynamic point of view, this is essentially what defines biological creatures: they maintain themselves far from equilibrium by accessing free energy, dissipating it and reiterating this process, in so doing continuously travelling across some potential energy gradient.16
Dissipative structures are open systems; they need a continual input of free energy from the environment in order to maintain the capacity to do ‘work’. It is this continual flux of energy into and out of a dissipative structure that leads to self-organization and ultimately the ability to function at a state of non-equilibrium. A famous example of a self-organizing dissipative structure is the spontaneous organization of water due to convection. If you take a thin layer of water at a uniform temperature and start heating it from the bottom, a pattern starts to emerge. As the temperature difference between the bottom and top of the water reaches a critical level, the water begins to move away from an equilibrium state and an instability develops within the system. At this point convection commences and the dissipative structure forms. As heat is transferred through the liquid, a patterned hexagonal or ‘honeycomb’ structure of so-called Bénard cells emerges and the capacity to do ‘work’ is realized. But as soon as the heat source is taken away, the ordered pattern disappears and the water returns to an equilibrium state.17
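The onset of this convective instability can be estimated with the dimensionless Rayleigh number, Ra = g·β·ΔT·L³ / (ν·α); convection sets in once Ra exceeds a critical value (about 1708 for a layer between rigid plates). The property values below are rough room-temperature figures for water, used purely for illustration:

```python
def rayleigh_number(g, beta, delta_t, depth, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha): ratio of buoyant driving
    forces to viscous and thermal damping in a layer heated from below."""
    return g * beta * delta_t * depth**3 / (nu * alpha)

ra = rayleigh_number(
    g=9.81,        # gravitational acceleration, m/s^2
    beta=2.1e-4,   # thermal expansion coefficient of water, 1/K
    delta_t=2.0,   # temperature difference across the layer, K
    depth=0.005,   # layer depth, m
    nu=1.0e-6,     # kinematic viscosity, m^2/s
    alpha=1.4e-7,  # thermal diffusivity, m^2/s
)
# Past the critical value the conductive state is unstable and cells form:
print(ra > 1708)  # True
```

Below the threshold, heat moves by conduction alone and no pattern forms; above it, the ordered Bénard cells appear, sustained only for as long as the gradient is maintained.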
Just like the convecting water, biological organisms are self-organizing dissipative structures: they take in and give off energy to and from the environment in order to sustain life processes, and in doing so they function at a state of non-equilibrium. Although biological organisms maintain a state far from equilibrium, they are still governed by the second law of thermodynamics. Like all physicochemical systems, biological systems are always producing entropy as part of the overwhelming drive towards equilibrium. In order to avoid this move towards equilibrium, they have to maintain themselves on some energy gradient. Just as the input and dissipation of energy within the water in the pan enabled the formation of the non-equilibrium pattern of convection cells, it is the constant input and dissipation of energy that enables biological creatures to exist far from equilibrium. Open dissipative systems avoid a transition into thermodynamic equilibrium through a continuous exchange of materials and energy with the environment.18
According to the theory of dissipative structures, an open system has the capability to continuously import free energy from the environment and, at the same time, export entropy. The internal structure and development of dissipative systems, as well as the process by which they come into existence, evolve and expire, are governed by the transfer of energy from the environment.19 Unlike isolated systems (or closed systems in a broader sense), which are always on the path to thermal equilibrium, dissipative systems can offset the entropic trend by consuming energy and using it to export entropy to their environment, thus creating negative entropy, or negentropy, which prevents the system from moving toward an equilibrium state. A negentropic process is, therefore, the foundation for growth and evolution in thermodynamic systems. Order in an open system can be maintained only in a non-equilibrium condition; in other words, an open dissipative system needs to maintain an exchange of energy and resources with the environment in order to continuously renew itself.20
For dissipative systems to sustain their growth, they must not only increase their negentropic potential but also eliminate the positive entropy that naturally accumulates over time as they sustain themselves. The build-up of a system’s internal complexity as it grows is always accompanied by the production of positive entropy, which must be exported out of the system as waste or low-grade energy. Otherwise, the accumulation of positive entropy will eventually bring the system to thermodynamic equilibrium, a state in which it can no longer maintain its order and organisation; in a biological organism, this is what we call death.21
Of central interest within non-equilibrium thermodynamics is this idea of a potential energy difference, or gradient, across which the system exists, and this idea is captured in the term exergy. Whereas energy and entropy are central concepts within standard thermodynamics, the idea of exergy is central to non-equilibrium thermodynamics and systems ecology. It is a measure of both the system and its environment, more specifically of how far the system departs from its environment: the maximum amount of work that can be done before the system comes to equilibrium with its surroundings.22 As such, exergy is a measure of the potential for usefulness. Because biological systems are largely defined by this capacity to maintain themselves far from equilibrium, we can theoretically gauge how alive they are by asking how far from equilibrium they are, and this is what exergy tries to capture. It is thus often used as a basis for the analysis of the health or resilience of an ecosystem; when exergy is zero the system is in equilibrium with its environment, which would equate to the complete collapse of an ecology.23
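For heat, exergy has a simple closed form: a quantity of heat Q available at absolute temperature T, in an environment at temperature T₀, can yield at most W = Q(1 − T₀/T) of work, the Carnot factor. A minimal sketch, with illustrative figures:

```python
def heat_exergy(q, t_source, t_env):
    """Maximum work extractable from heat q (J) at absolute temperature
    t_source (K), relative to an environment at t_env (K):
    X = q * (1 - t_env / t_source), the Carnot factor."""
    return q * (1.0 - t_env / t_source)

# 1000 J of heat at 600 K, with surroundings at 300 K, can yield at most 500 J:
print(heat_exergy(1000.0, 600.0, 300.0))  # 500.0
# With no temperature gradient there is no exergy and no useful work:
print(heat_exergy(1000.0, 300.0, 300.0))  # 0.0
```

The second line makes the ecological reading concrete: exergy vanishes exactly when the system no longer differs from its environment.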
Whereas most of the laws governing physical systems are theoretically time-reversible, dissipative processes are path-dependent and irreversible. A non-equilibrium state requires time- and space-dependent state variables for its description, because of the exchanges of mass and energy between the system and its surroundings. An irreversible process is one in which free energy is dissipated; how the process was performed comes to matter.24 This time irreversibility is closely related to efficiency. The destruction of exergy is closely tied to the creation of entropy, and as such any system containing highly irreversible processes will have a low efficiency. As an example, the combustion process inside a power station’s gas turbine is highly irreversible, and approximately 25% of the exergy input will be destroyed there.25
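The link between irreversibility and lost work is quantified by the Gouy-Stodola relation: the exergy destroyed by an irreversible process equals the environment temperature multiplied by the entropy generated. A minimal sketch, with illustrative numbers:

```python
def exergy_destroyed(t_env, entropy_generated):
    """Gouy-Stodola relation: X_destroyed = T0 * S_gen.

    t_env: environment (dead-state) temperature, K.
    entropy_generated: entropy produced by the irreversible process, J/K.
    """
    return t_env * entropy_generated

# A process generating 2 J/K of entropy in a 298 K environment
# permanently destroys 596 J of work potential:
print(exergy_destroyed(298.0, 2.0))  # 596.0
```

Every joule per kelvin of entropy generated is work potential gone for good, which is why engineers track entropy generation as the direct bookkeeping of inefficiency.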