
Thermodynamics

Taking a pragmatic stance, thermodynamic theory provides a consistent frame of reference that no other theory can: an agreed-upon direction of time. This directionality of time allows us to go beyond the constraints imposed by the maze of self-referential circularity and complexity found in most realistic systems of interest.

A thermodynamic analysis is explicitly a macroscopic (probabilistic) approach that embraces the spatial, temporal and organisational complexity of systems, and so avoids the majority of difficulties associated with a reductionistic approach. However, the price of this simplicity (reduced degrees of freedom) is the loss of detail: mechanisms and processes. Until the micro- and macro-level descriptions are shown to be convergent, there will be little advancement.

The thermodynamic perspective

The first well-known attempt at a thermodynamic understanding of ecological patterns and interactions is attributable to Lotka (1922). Lotka's work was based upon the earlier writings of Boltzmann (1905), who had restated the Darwinian principle of natural selection in energetic terms: that living systems struggle for free energy. Lotka (1922) reformulated this idea further into his "Maximum power principle":

Natural selection tends to make the energy flux through the system a maximum, as much as possible.

In other words, when there is a resource available, it will be used, and organisms are selectively advantaged if they can use these resources more efficiently. The most probable consequence of this process was that the total amount of energy passing through the biota would be as large as possible. Of course, one must parenthetically recognise, first, the uneasy circular relationship that exists between that which is a resource and that which is selectively advantaged (as mentioned in the previous section); second, the rather open-ended nature of the constraints referred to by the clause "as much as possible"; and third, the potential for higher-order interactions (indirect interactions; Forbes 1880, Paine 1974, Patten 1983, Smith et al. 1997) to modulate such simplistic expectations (e.g., via the extinction or extirpation of a focal or "keystone" species). This general line of reasoning has continued through the work of Margalef (1963), Odum and Pinkerton (1955), Matsumo (1978), Johnson (1981, 1994), Odum (1983) and Schneider and Kay (1994), and today persists particularly in the fisheries and bioenergetics literature, as it provides a much-needed integrative and unifying framework (Ware 1982, Bryan et al. 1990, Lin 1995).

Schrödinger (1945) popularised these ideas as the principle of "negative" entropy or "negentropy" when he explicitly pointed out the "anti-entropic" nature of life that was implicit in Boltzmann's and Lotka's perspectives. This "negentropic" principle describes the tendency for living organisms to become more ordered and to maintain that order for a time (i.e., decrease their local entropy) against the universal pattern of the Second Law of Thermodynamics for order to be destroyed with time (entropy to increase). Living organisms accomplish this apparently anti-entropic feat (locally) by actively "exporting" excess disorder at the expense of the universe (globally). That is, living systems "create" a local fluctuation (i.e., a local reversal) in the global action of the Second Law of Thermodynamics.

It has now become clear that many non-living dynamical systems (chemical reactions, turbulent mixing, weather patterns) also demonstrate such local fluctuations in the action of the Second Law---like the small eddies that flow counter to the currents in flowing waters. While these patterns are empirically (i.e., phenomenologically) well known, it is not at all clear what may be causing these local fluctuations in the action of the Second Law.

Prigogine (1947) was one of the first to suggest a formal (mathematical) cause of this anti-entropic tendency of some systems---the "Least specific dissipation" (LSD; see Appendices 1 and 2C) principle. However, the dependence of this principle upon a linearised description of thermodynamic processes infinitesimally close to "thermodynamic equilibrium" limited the scope of its application and so its general acceptance.
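For orientation, the LSD principle is usually expressed in terms of the local entropy production, written as a sum of flux-force products (the notation below is a standard textbook form, not necessarily that of the Appendices):

\sigma = \sum_{k} J_k X_k \ge 0

where the J_k are thermodynamic fluxes (of heat, matter, etc.) and the X_k are their conjugate forces (gradients). Near equilibrium, with the fluxes linear in the forces, a system held at a steady state by fixed boundary constraints settles into the state of least (specific) dissipation; that is, the stationary state minimises sigma subject to those constraints.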

Continuing to explore these lines of the Darwinian-Boltzmann-Lotka school, Glansdorff and Prigogine (1971) suggested that this local fluctuation in the action of the Second Law is attributable to the presence of local fluctuations of free energy densities (i.e., statistical asymmetries or inhomogeneities) that become coherent flows of energy and matter. They called this mechanism for the autocatalytic creation of ordered "dissipative structures" (e.g., the "anti-entropic" eddies, above) the "Order through fluctuation" (OTF) scenario. Such ordered flows are never completely efficient, owing to the Second Law of Thermodynamics, and so they act to dissipate the very same thermodynamic gradients that gave rise to them (hence the name, "dissipative structures"). As such, Glansdorff and Prigogine (1971) suggested the OTF scenario to be a phenomenological mechanism for the creation of order and the consequential maintenance of some local quasi-steady state where LSD-like conditions apply.

In other words, the local fluctuations in the action of the Second Law are attributed to local fluctuations in free energy gradients. What causes these fluctuations in free energy gradients is, of course, not addressed in any detail beyond attributing them to chance anomalies, as any such attempt would lead to another circular/self-referential loop in causation: local fluctuations in the Second Law are caused by local fluctuations in free energy gradients, which are in turn caused by local fluctuations in the Second Law, and so on. This question is taken up again in Chapters 3 and 6 (of my PhD thesis), where an alternate (hierarchical, fractal-like) framework is used in an attempt to lessen this circularity of causation.

Linear thermodynamics

The linear nature of Prigogine's (1947) original proof mentioned above represents a point of concern that needs clarification. The main limitations of this approach are the formal (mathematical) requirement of linearisability of the thermodynamic flows (fluxes) with respect to their driving forces, and the use of the "Onsager reciprocity relations", which requires a microscopic reversibility of interactions.

The basis of the "Onsager reciprocity relations" is the assumption of microscopic reversibility of molecular processes (i.e., a time symmetry of statistical microscopic fluctuations). However, in its application to generalised compartmentalised systems (so-called "black boxes" in ecological modelling), this reversibility can be assured by a judicious choice of the space-time scales of reference. That is, the problem can be treated as a scalar (isotropic) one rather than a vectorial (anisotropic) one. Within the context of the algebra utilised in Chapter 3, the application of Onsager's principle is tantamount to expecting useful free energy flows from system A to B to have a proportional effect upon the useful free energy flows from system B to A---that is, some recycling of energy. Under most circumstances this is quite a reasonable assumption, given the empirical results of Winberg (1972), Pomeroy (1974), Platt et al. (1984), Fath and Patten (1998) and many others that have repeatedly demonstrated the importance of matter and energy cycling (i.e., synergism).
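In the standard near-equilibrium formalism (again, textbook notation rather than the algebra of Chapter 3), the fluxes are written as linear combinations of the forces,

J_i = \sum_{j} L_{ij} X_j, \qquad L_{ij} = L_{ji}

and the symmetry L_ij = L_ji is the formal statement of the reciprocity invoked above: the coupling of force j to flux i equals the coupling of force i to flux j which, in the compartmental reading, corresponds to the expectation of proportional reciprocal flows (i.e., some recycling) between systems A and B.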

Ignoring the non-linear terms that are customarily removed in linearisation techniques is problematic in highly nonlinear systems, as their influence upon the time-evolution of a system can be quite important. However, including such non-linear terms makes the solution, and the traditional stability analyses (e.g., Liapunov's methods), of the thermodynamic "equations of motion" (sensu Denbigh 1951) intractable. This is especially the case as current non-linear theory has shown that the "stability" of systems can be quite complex (e.g., see Byers and Hansell 1992, 1996).

However, regardless of the formal (mathematical) assumptions of the approach, the "Least specific dissipation principle" has a wide range of empirical applicability. The range of this applicability can only be determined via empirical studies (e.g., Denbigh 1951, Spanner 1964:234, Katchalsky and Curran 1967). The empirical regularities illustrated for photosynthetic pathways by Spanner (1964); membrane dynamics by Peusner (1970, 1986) and Mikulecky (1977, 1985); developmental patterns by Zotin and Zotina (1967), Lurié and Wagensberg (1979) and Briedis and Seagrave (1984); and population and larger level patterns by Johnson (1994), Gladyshev (1978, 1997:17), all suggest that the linear theory may have some applicability to biological systems and so cannot be rejected without consideration of empirical data.

The perspective adopted here is to acknowledge the importance of nonlinearity (and pseudo-nonlinearity) but not to assume that it renders every question unanswerable. A case in point is the use of linearisation techniques to characterise a system. The appropriateness of this use varies with the degree of nonlinearity of the region of analysis and the presence of critical points (singularities). It becomes possible to exploit this varying appropriateness of a linear approximation, as the system deviates from some local quasi-steady state, as an index of the degree of non-linearity manifest in the system; this is achieved by relaxing the constraint of the LSD principle that systems must be infinitesimally close to thermodynamic equilibrium. By shifting the focus to local (i.e., judiciously chosen space-time scales of reference) quasi-steady (slowly time-varying, relative to the space-time scales of reference) states, where the dynamical equations can be arbitrarily linearised (regardless of the intrinsic non-linearity of the systems), it becomes possible to apply these concepts in a scale-free context. Under such a context, the LSD principle simply implies a slowing down of flows and the reduction of associated free energy gradients (i.e., "irreversibilities", sensu Professor J.J. Kay, personal communication). Such a potential was examined with the aid of empirical data. Every attempt has been made to push this approach to its limits so as to understand the limits of its application. In no way do we claim that this represents the sole approach to the issues treated in the thesis. However, we do claim that it is a utilitarian approach, whose real potential only further empirical work can illustrate.
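Schematically (in generic notation, not that of Chapter 3), if the state x of the system evolves as dx/dt = f(x), then near a quasi-steady reference state x* (where f(x*) is approximately zero on the chosen space-time scales) the dynamics can be approximated by

\frac{d\mathbf{x}}{dt} \approx \mathbf{J}(\mathbf{x}^{*})(\mathbf{x} - \mathbf{x}^{*}), \qquad J_{ij} = \frac{\partial f_i}{\partial x_j}\bigg|_{\mathbf{x}^{*}}

and the rate at which this linear approximation degrades with increasing displacement from x* can then serve as the proposed index of the nonlinearity manifest at that scale.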

 

Other thermodynamic schools of thought

There exist innumerable thermodynamic schools of thought (see Patten et al. 2002, and references therein, for a sample of the currently active and major ecological schools). Amongst these are: the "Emergy" Analysis of Odum (1996), the genomic (statistical) "Exergy" Analysis of Jørgensen (1992), the free energy "Exergy Analysis" of Schneider and Kay (1994), "Emergy and Transformity Analysis" (Odum 1996), "Power Analysis" (Odum 1983), "Entropy Analysis" (Aoki 1989, 1993), "Kullback Information Analysis" (by Svirezhev, in Patten et al. 2002), the "Ascendency Analysis" of Ulanowicz (1986), "Network Thermodynamic Analysis" (Peusner 1970, Mikulecky 1977), the "Action Analysis" of Johnson (1994) and Vanriel and Johnson (1995), and the more information-theoretic interpretations of Wicken (1980) and Brooks and Wiley (1986).

Two schools that are particularly similar to the Darwin-Boltzmann-Lotka-Prigogine lineage (described above) are both known as "Network thermodynamics". The first, developed by Peusner (1970, 1983, 1986) and Mikulecky (1977, 1984, 1985), uses an analogical scalar representation of systems in the language of electrical circuits (especially Kirchhoff's current and voltage laws) and so parallels the work of H. Odum (1983). Due to its scalar nature, the linearity assumptions are less constraining, and a full non-linear theory has also been developed (Mikulecky 1977). The second approach is superficially very similar, but more explicitly topological in nature (using "bond graphs"), and is due to the work of Oster, Perelson and Katchalsky (1971). In both approaches, more general (widely applicable) equivalents to Onsager's relations and Prigogine's least specific dissipation principle have been demonstrated (i.e., Tellegen's quasi-power theorem).
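For reference, Tellegen's theorem (quoted here in its usual circuit form, not from the sources above) states that for any network whose branch potentials v_b satisfy Kirchhoff's voltage law and whose branch flows i_b satisfy Kirchhoff's current law,

\sum_{b} v_b \, i_b = 0

and, in its quasi-power form, the potentials and the flows may even be taken from two different states (or times, or systems) sharing the same network topology. Because the result depends only on the network's connectivity and the two conservation laws, it holds regardless of whether the branch constitutive relations are linear, which is why it is regarded as a more widely applicable counterpart to the linear Onsager-Prigogine results.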

The work of Ulanowicz (1972, 1980, 1983) and Ulanowicz and Hannon (1987) represents a third parallel development, in which the network through-flows and cycling topologies are dissected and information-theoretic measures of the network topology are used to describe the structure and development of ecological systems. The developmental patterns are summarised through the notion of "ascendency", a combined information-theoretic measure of the network topology and energy through-flow. Patten's (1978, 1985) linear environ theory also converges upon similar approaches, although the thermodynamic and information-theoretic concepts are less directly treated and the focus is more upon the unravelling of the cycles and the use of an index of cycling (synergism). All these approaches (and there are many others) rely upon a linear thermodynamic formalism for the analysis of open systems. Each has a nominal "goal function" (e.g., Jørgensen et al. 1995) that is phenomenologically derived in some fashion.
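One commonly cited form of the ascendency (quoted from the general literature rather than from the specific papers above) is

A = \sum_{i,j} T_{ij} \log \frac{T_{ij} \, T_{\cdot\cdot}}{T_{i\cdot} \, T_{\cdot j}}

where T_ij is the flow from compartment i to compartment j, T_i. and T_.j are the corresponding row and column sums, and T_.. is the total system throughput; the logarithmic term is the average mutual information of the flow network, weighted by the through-flows.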

Schneider and Kay (1994) have been proponents of the re-statement of the Second Law solely in terms of the degradation of exergy gradients. The advantage of such a qualitative formulation is that it avoids the problems inherent in a quantitative treatment (e.g., nonlinearity, and the definition of entropy in a nonequilibrium state---but see also the next section, and Meixner 1969 and Aoki 1989, for contrasting opinions on the immeasurability of exergy).

 

Second law analysis

Second law analysis is the "direct application of the Second Law of Thermodynamics" to the analysis of energy transformations, generally in the context of cost and efficiency optimisations in engineering applications (Gaggioli 1980). The currency used in this mode of analysis is exergy (the available, or useful, free energy). As such, Second law analysis represents a detailed accounting of how processes alter the quality of energy. This is in contrast to a "First law analysis", which is a detailed accounting of the quantity of energy flowing through systems. Thus, for example, the accounting of energy flows for a given food web, such as those illustrated by Lindeman (1942), represents a First law analysis (i.e., the conservation of energy). In contrast, Schneider and Kay's (1994) analysis of the change in quality from high-quality incoming solar energy (shortwave) to low-quality outgoing radiation (longwave) as it passes through various ecosystems represents a Second law analysis.
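To make the distinction concrete (in standard engineering notation, not drawn from the works cited above): a First law balance tracks energy, which is conserved, whereas a Second law balance tracks exergy, which is destroyed in every real (irreversible) process. For a control volume at steady state, the rate of exergy destruction is proportional to the rate of entropy generation (the Gouy-Stodola relation),

\dot{Ex}_{destroyed} = T_0 \, \dot{S}_{gen} \ge 0

where T_0 is the temperature of the reference environment. A Second law analysis amounts to accounting for where, and how rapidly, this destruction occurs along each energy transformation.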

In the biological sciences, the concept of energy quality has been present from the very beginning (e.g., notions of food quality; niche exploitation; vitamins and essential amino acids in organismal health; critical stoichiometric elemental ratios such as the Redfield ratios, Smith 1983, Downing and McCauley 1992; the relative abundance of different photosynthetic mechanisms such as the C3 and C4 pathways; pigments using different parts of the light spectrum; and switching between detritus, grazing and predation pathways). In fact, it is because of the very large number of currencies of energy quality that biologists have repeatedly turned to energy as a common currency. In short, the study of any given biological system's structure and function (i.e., how they are adapted to a particular set of internal-external, structural-functional constraints) fundamentally utilises an implicit form of first and second law analyses.

The myriad ways in which such adaptations are expressed, and their plasticity even within those constraints, have made the biologists' work that much more delightful to narrate and that much more difficult to assimilate. To conduct an exhaustive (and quite costly) First and Second law analysis, with its accompanying thermodynamic systems description, would be quite informative and useful in the narrative sense---particularly at the resolution of ecosystem levels of organisation, where such analyses are so lacking. However, first, exhaustive attempts at accounting for flows face the same intrinsic difficulties of circularity and complexity discussed above. Second, the complete description of all mass, energy and exergy flows into and out of the relevant systems at the relevant space-time scales represents a formidable task that would be difficult to complete within the lifetime of a single researcher. Third, the calculation of the "correct" thermodynamic efficiencies was not the goal of this study, although such information may be useful. Finally, there is no generally accepted way of defining the reference state from which to measure exergy. As such, measuring the quantity, let alone the quality, of free energy entering and leaving biological systems is not a trivial task (Månsson and McGlade 1993, Aoki 1993), especially when one accepts that the spatial-temporal-organisational variations in these flows generally increase monotonically with the magnitude of the flows. For these reasons, a strict Second law analysis was not and could not be undertaken for the thesis.

While Second law analysis was not explicitly conducted, the analysis of metabolic rates (waste heat production) represents an implicit form of Second law analysis. This is because waste heat production from metabolic activity represents an estimate of the energy converted from high-quality, biologically useful free energy to low-quality, biologically less useful energy (heat). It is possible for some of this low-quality heat energy to be re-used directly or indirectly by the producing system (e.g., organism); however, most of the heat leaving an organism represents a rather permanent loss. This is particularly the case at synecological levels of organisation. For this reason, metabolic rates are suggested to be a practical index of the net entropy production (irreversibilities) attributable to the activities of the system (see also, Appendix 2C). Such a focus upon metabolically induced irreversibilities represents a Second law analysis in the broadest sense.
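As a rough illustration of this reading (an approximation introduced here, not a formula from the thesis), the entropy exported to the surroundings by the dissipation of metabolic heat at a rate approximately equal to the respiration rate R (in energy units) into an environment at absolute temperature T is

\dot{S}_{exported} \approx \frac{R}{T}

so that, at roughly constant environmental temperature, comparisons of respiration rates (or of mass-specific rates such as R/B) track comparisons of the associated entropy production. It is this correspondence that licenses the use of metabolic rates as an index of irreversibility.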

As for the material wastes, such as urea, faecal matter and other forms of "lost" biomass, these quantities remain part of the ecosystem until their eventual metabolic transformation into waste heat. These mass, energy and exergy flows represent short-term (one-pass) repartitionings of the free energy within the ecosystem. In the long term (iterated, multi-pass), however, the same mass can and does recycle within the system until its eventual metabolic passage into waste heat (i.e., entropy production; the "metabolic turnover" of Briedis and Seagrave 1984).

To conclude, the approach is fundamentally an attempt to short-circuit the limitations of a reductionistic/holistic approach by starting from a very simple and general premise and drawing from it an equally simple, but hopefully useful, expectation of how systems should change. This is done by focusing upon the local irreversibilities (rather than attempting an exhaustive and impossible exergy analysis), estimated as the intensity of waste heat produced due to biological activity (i.e., the R/B ratio). This choice is justified empirically on the grounds that many important biological rates are correlated with each other (ingestion, egestion, excretion, gross primary production, net primary production and total respiration rates; e.g., see Figure 4.3). This is also evidenced in their common allometric basis (e.g., Peters 1983). As such, any one of these flows could also serve as a useful index of the irreversibilities attributable to biological activity. The R/B ratio was chosen for the following reasons.

This intensity of entropy production is an index of the intensity of biological activity, much as temperature is an index of the intensity of molecular activity (see below). This index is directly related to the patterns of size-abundance due to the very strong allometric scaling of respiration rates. When examined via randomisation methods over the range of naturally observed size-abundance patterns, the mass-specific respiration rates were found to approach a minimum at scaling exponents greater than -1 (Figure 2.6). This means that changes in the direction of less negative scaling exponents (i.e., towards an ever-decreasing intensity of biological activity, or increases in the numbers of larger-sized organisms) are thermodynamically favoured, regardless of the spatial, temporal or organisational scale of the focal system.
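A minimal numerical sketch of this pattern follows. It is not the randomisation procedure of Chapter 2; it simply assumes an abundance-size distribution N(M) proportional to M^b and an individual respiration rate scaling as M^0.75 (the exponent and the body-size range are illustrative assumptions), and computes the community R/B ratio for several values of b to show that R/B declines as b becomes less negative.

    import numpy as np

    # Illustrative assumptions (not values from the thesis): 200 logarithmically
    # spaced body-size classes and a 3/4-power scaling of individual respiration.
    masses = np.logspace(-6, 3, 200)   # body sizes of the size classes (arbitrary units)

    def community_R_over_B(b, masses, resp_exponent=0.75):
        """Community respiration-to-biomass ratio for abundance N(M) ~ M**b."""
        abundance = masses ** b                  # relative abundance in each size class
        respiration = masses ** resp_exponent    # individual metabolic rate ~ M**0.75
        R = np.sum(abundance * respiration)      # total community respiration
        B = np.sum(abundance * masses)           # total community biomass
        return R / B

    for b in (-1.25, -1.0, -0.75, -0.5):
        print(f"b = {b:+.2f}  ->  R/B = {community_R_over_B(b, masses):.3g}")
    # R/B declines as b becomes less negative, i.e. as larger organisms make up a
    # greater share of the community, consistent with the direction of change
    # favoured in the text.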

This thermodynamic tendency towards an ever-decreasing intensity of biological activity (i.e., low R/B ratios; large average size; or, as it was also called in Chapter 3, enhanced local-order), embodied in the LSD principle, is opposed by the effect of environmental uncertainty stemming from interactions with other systems, sub-systems and super-systems (i.e., perturbations, stressors; the degradative action of the Second Law of Thermodynamics via the OTF scenario, in Chapter 2; or "local-disorder" in Chapter 3). The empirically observed exponent of the size-abundance relationship is in a dynamic quasi-steady state that is modulated by the antagonistic interplay between the action of the LSD principle (local-order) and the OTF scenario arising out of perturbations (local-disorder). The results of these antagonistic interactions between internal and external processes are more commonly known as "successional" change in community ecology (i.e., the balance between internal growth and external perturbations), and as K- and r-type life history selection in population ecology (i.e., the balance between internal growth producing "stable" age, size and reproductive structures vs. external perturbations producing "variable" size and age structures).

Empirical tests of the utility of this integrated index of the perturbation regime were attempted by comparing the R/B ratio to dominant abiotic indices of a system's sensitivity to perturbations. Due to the complex nature of real systems, isolating a single factor is difficult unless the perturbation is dominant. The size of a lake was chosen as such a dominant characteristic, the assumption being that the larger the lake, the less sensitive it is to perturbations. This is a relatively straightforward assumption, as large lakes are simply more robust to fluctuations (e.g., temperature fluctuations, chemical change) due to statistical size-volume averaging effects. Thus, lower R/B ratios were expected in larger lakes; this prediction was confirmed in Chapter 2, indicating that the thermodynamic index (the R/B ratio) may be a functional measure of the degree of perturbation of a system.