Kurzgesagt – In a Nutshell 

Sources – Boltzmann Brain


Thanks to our expert: 

Johns Hopkins University and Santa Fe Institute

–As far as you know, you are real and exist in a universe that was born 14 billion years ago and that gave rise to galaxies, stars, the Earth, and finally you.

The latest estimates of the present age of the Universe, based on the properties of the cosmic microwave background, indicate that our Universe is about 13.8 billion years old:


#Planck Collaboration (2020): “Planck 2018 results VI. Cosmological parameters”. Astronomy & Astrophysics, vol. 641

https://www.aanda.org/articles/aa/full_html/2020/09/aa33910-18/aa33910-18.html

This video will deal with such huge time-spans that exact figures will be irrelevant. For the sake of simplicity, from now on we’ll round off the estimated present age of the universe from 13.8 to 14 billion years.



Except, maybe not. You may actually not exist for real but be the dream of a dead universe – you and everything you think exists. Crazy as it sounds, this may be an unavoidable consequence of our best scientific theories about the universe.

 

The idea explained in this video is known in the scientific literature under the name “Boltzmann Brains”, and it deals with the physical properties of the extremely far future of a universe dominated by a cosmological constant (to be explained below). In its modern incarnation, this line of research was initiated in part by observations made in 2002 about the future of our universe and the probability that a universe like the one we see could exist at all:


#Dyson, Lisa et al. (2002): “Disturbing Implications of a Cosmological Constant”. Journal of High Energy Physics, vol. 2002.

https://iopscience.iop.org/article/10.1088/1126-6708/2002/10/011/meta   

Quote: “We are forced to conclude that in a recurrent world like de Sitter space our universe would be extraordinarily unlikely.”


The germ of the idea can be traced back to a hypothesis put forward in the 19th century by Ludwig Boltzmann, who discovered the statistical laws underlying thermodynamics. In 1897 (i.e. years before the advent of modern cosmology, and years before physicists learned that the universe had a beginning), Boltzmann speculated that our universe could have emerged as a spontaneous fluctuation in a thermally dead universe:

#Barrow, John and Tipler, Frank (1986): The Anthropic Cosmological Principle, Oxford University Press.

https://global.oup.com/academic/product/the-anthropic-cosmological-principle-9780192821478?cc=de&lang=en&

A good summary of what in recent years has become known as the “Boltzmann Brain problem” is given in the following article:

#Carroll, Sean (2020): “Why Boltzmann Brains Are Bad”. Dasgupta Shamik et al. (eds), Current Controversies in Philosophy of Science, Routledge.

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315713151-3/boltzmann-brains-bad-sean-carroll  

https://arxiv.org/abs/1702.00850  (open-access version)

Quote: “Some modern cosmological models predict the appearance of Boltzmann Brains: observers who randomly fluctuate out of a thermal bath rather than naturally evolving from a low-entropy Big Bang. A theory in which most observers are of the Boltzmann Brain type is generally thought to be unacceptable, although opinions differ. I argue that such theories are indeed unacceptable: the real problem is with fluctuations into observers who are locally identical to ordinary observers, and their existence cannot be swept under the rug by a choice of probability distributions over observers. The issue is not that the existence of such observers is ruled out by data, but that the theories that predict them are cognitively unstable: they cannot simultaneously be true and justifiably believed.”



Put a drop of red ink in a glass of water and you see the ink spread until it fills the container but never the opposite: colored water where ink spontaneously concentrates and becomes a drop at the surface again. Time always seems to flow in the direction in which the ink spreads.


This everyday experience is explained by the Second Law of thermodynamics, a universal law obeyed by all known physical systems. In simple terms, this law states that, in every naturally occurring process, the total entropy always increases with time. The “entropy” of a system is a somewhat abstract quantity that in most situations can be thought of as quantifying the “disorder” of the system. Generally speaking, a more disordered configuration (like having all the ink spread around filling the glass) has a higher entropy. A more ordered configuration (like having all the ink concentrated at one spot) has a lower entropy. Although strictly speaking this is an overgeneralization, in order to fix ideas we’ll stick in the following to this analogy between “higher entropy ~ disorder” and “lower entropy ~ order”.


Since the Second Law of thermodynamics states that entropy always increases with time, this basically means that the disorder in the universe always grows. That’s why, in an example like the one mentioned above, we always see the ink spontaneously spreading and filling the glass, but never the opposite. Another familiar example is that of heat flow: heat always flows from a warm body to a cold one, never in the opposite direction:  


#NASA (2021): “Second Law of Thermodynamics”. Glenn Research Center (retrieved 2023)
https://www.grc.nasa.gov/www/k-12/airplane/thermo2.html 

Quote: “The first law of thermodynamics defines the relationship between the various forms of energy present in a system (kinetic and potential), the work which the system performs and the transfer of heat. The first law states that energy is conserved in all thermodynamic processes.


We can imagine thermodynamic processes which conserve energy but which never occur in nature. For example, if we bring a hot object into contact with a cold object, we observe that the hot object cools down and the cold object heats up until an equilibrium is reached. The transfer of heat goes from the hot object to the cold object. We can imagine a system, however, in which the heat is instead transferred from the cold object to the hot object, and such a system does not violate the first law of thermodynamics. The cold object gets colder and the hot object gets hotter, but energy is conserved. Obviously we don't encounter such a system in nature and to explain this and similar observations, thermodynamicists proposed a second law of thermodynamics. Clasius, Kelvin, and Carnot proposed various forms of the second law to describe the particular physics problem that each was studying. The description of the second law stated on this slide was taken from Halliday and Resnick's textbook, "Physics". It begins with the definition of a new state variable called entropy. Entropy has a variety of physical interpretations, including the statistical disorder of the system, but for our purposes, let us consider entropy to be just another property of the system, like enthalpy or temperature. [...]

The second law states that if the physical process is irreversible, the combined entropy of the system and the environment must increase. The final entropy [SF] must be greater than the initial entropy [SI] for an irreversible process:


SF > SI (irreversible process)


An example of an irreversible process is the problem discussed in the second paragraph. A hot object is put in contact with a cold object. Eventually, they both achieve the same equilibrium temperature.”

–But if you take a microscope, all you see will be a swarm of molecules colliding at random – there are no rules, no forwards and backwards. Every individual motion that happens can occur in reverse.

By definition, the laws of thermodynamics apply to macroscopic systems, i.e. systems composed of a very high number of particles (the number of molecules in a glass of water, for example, is of the order of 10^23). So the second law means that, in the macroscopic realm, time just flows in one direction – the direction in which the entropy increases.

However, this one-way behavior of time seems to break down when we get down to the particle level. This is so because all the relevant laws obeyed by individual particles remain the same when we run time backwards:

#Leggett, Anthony J. (2014): “The arrow of time”. Physics Department, University of Illinois at Urbana-Champaign (retrieved 2023):
https://courses.physics.illinois.edu/phys419/fa2014/Lecture25_TheArrowOfTime.pdf
Quote: “it is generally agreed that with one minor and probably not very important exception (see below), the laws of physics at the microscopic level are invariant under time reversal. This is particularly easy to see in the case of Newtonian mechanics. The first and third laws clearly do not involve the sense of time, so consider the second, “force = mass × acceleration”. What happens if we reverse the reckoning of time – i.e., set t → −t? The “force” on the left-hand side can usually be expressed as the gradient of a potential energy (though see below), and thus does not care about the direction in which time is measured. On the right-hand side, the “mass” clearly is invariant. As to acceleration, this is the rate of change of velocity with time. When time is reversed, the velocity is also, so the acceleration is unchanged. Thus, Newton’s laws, and therefore the whole of Newtonian mechanics, is invariant under time reversal.”


This can be grasped intuitively from the fact that, at the microscopic level, all the interactions between molecules amount to collisions between individual molecules, and each of these collisions can occur equally well in the reversed direction. This generally applies to any process involving Newton’s laws on idealized masses or point particles:

#Leggett, Anthony J. (2014): “The arrow of time”. Physics Department, University of Illinois at Urbana-Champaign (retrieved 2023):
https://courses.physics.illinois.edu/phys419/fa2014/Lecture25_TheArrowOfTime.pdf
Quote: “What this means, operationally, is that if we were to show an astronomer from a planet of some distant star a speeded-up movie of the motion of the planets of our own solar system, he would be unable to tell, just from a knowledge of Newtonian mechanics, whether it were being run forward or backward. Similarly, if we imagine an idealized billiard table that is so smooth (and the balls so elastic) that dissipation both in the motion of the balls over the cloth and in their collisions with one another is totally negligible, again we would be unable to tell whether a movie of the processes going on it is being run in the right direction: the time reverse of every process is also a possible process!”


But we perceive a sort of arrow of time that makes things happen in one direction – how does this arise? Well, this arrow of time is not actually fundamental, but a matter of probability. When ink molecules spread to fill a glass, there are many different “slots” of space they can occupy, and therefore many different ways to combine them. And just like your chances of winning the lottery grow the more tickets you have, the probability that ink molecules will end up filling the glass is much higher than the probability that they’ll concentrate in just one spot.


The reason why we experience an arrow of time macroscopically lies in the statistical definition of entropy; i.e. in its definition in terms of the microscopic constituents of a system. Such a definition was proposed in the 19th century by the Austrian physicist Ludwig Boltzmann and quantifies the intuitive –and imprecise– definition of “disorder” given above. 


Given a macroscopic system, like a gas in a room or a mixture of water and ink in a glass, the key ingredient we need to understand entropy is the concept of “number of microstates”. This is the number of different ways in which we can arrange the molecules of the system so that, macroscopically, we couldn’t tell the difference. 


As a simple example, consider 4 molecules in a box divided into two smaller boxes: box 1 and box 2. There are five possible “macrostates”: (a) all four molecules in box 1; (b) three molecules in box 1 and one in box 2; (c) the molecules evenly distributed, two in each box; (d) one molecule in box 1 and three in box 2; and (e) all four molecules in box 2. Given one of these macrostates, the number of microstates is the number of different ways in which we can arrange the individual molecules:


#University of Wisconsin-Madison: “Microstates and Entropy”. UW-Madison Chemistry 103/104 Resource Book (retrieved 2023)
https://wisc.pb.unizin.org/chem103and104/chapter/microstates-and-entropy-m17q2/

As we see, the more “disordered” –i.e. uniform– our macrostate is, the more microstates are associated with it.


Given a macrostate and the number W of microstates associated with it, the entropy S is defined as (proportional to) the natural logarithm of W,

S = k log W


where k is a universal constant known as “Boltzmann constant” and whose value is approximately given by

k = 1.4·10^−23 J/K


#Encyclopaedia Britannica: “Entropy and disorder” (retrieved 2023)
https://www.britannica.com/science/principles-of-physical-science/Entropy-and-disorder
Quote: “Thermodynamic entropy is a numerical measure that can be assigned to a given body by experiment; unless disorder can be defined with equal precision, the relation between the two remains too vague to serve as a basis for deduction. A precise definition is to be found by considering the number, labeled W, of different arrangements that can be taken up by a given collection of atoms, subject to their total energy being fixed. [...] Boltzmann and Gibbs, along with Max Planck, established that the entropy, S, as derived through the second law of thermodynamics, is related to W by the formula S = k ln W, where k is the Boltzmann constant (1.3806488 × 10^−23 joule per kelvin) and ln W is the natural (Naperian) logarithm of W.”

From the formula S = k log W we see that systems with a higher number of microstates will have a higher entropy. Therefore, the fact that entropy grows with time means that physical systems will naturally evolve towards those macrostates that have a higher number of associated microstates – i.e. towards “more disordered” configurations, as illustrated by the simple example above.


This reconciles the absence of an arrow of time at the molecular level with its existence at the macroscopic level. In the microscopic realm, molecules move at random, showing no preferred direction. However, such random motion means that they are constantly exploring all possible microstates and, as a consequence, they will spend most of the time in the macrostate with the highest number of associated microstates.

In the simple four-molecule example considered above (and assuming a uniform distribution over time), the fraction of time that the system will spend in each macrostate would be:

Macrostate – Fraction of time spent in that macrostate


(a) 1/16 ~ 6%
(b) 4/16 ~ 25%
(c) 6/16 ~ 38%
(d) 4/16 ~ 25%
(e) 1/16 ~ 6%


i.e. the system spends most of the time in the macrostate with the highest entropy.
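The 1/16, 4/16, 6/16 fractions and the associated entropies can be checked with a few lines of Python – a minimal sketch of the four-molecule example, not part of the cited sources:

import math
from collections import Counter
from itertools import product

k = 1.380649e-23  # Boltzmann constant, J/K

# Every microstate assigns each of the 4 molecules to box 1 or box 2.
microstates = list(product((1, 2), repeat=4))        # 2**4 = 16 microstates in total

# A macrostate only records how many molecules sit in box 1 (4, 3, 2, 1 or 0),
# corresponding to the cases (a)-(e) in the text.
W_per_macrostate = Counter(state.count(1) for state in microstates)

labels = {4: "(a)", 3: "(b)", 2: "(c)", 1: "(d)", 0: "(e)"}
for n_in_box1 in (4, 3, 2, 1, 0):
    W = W_per_macrostate[n_in_box1]                  # number of microstates
    fraction = W / len(microstates)                  # fraction of time spent there
    S = k * math.log(W)                              # Boltzmann entropy S = k ln W
    print(f"{labels[n_in_box1]}  W = {W}  fraction ~ {fraction:.0%}  S = {S:.2e} J/K")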



–So it is not that the ink forming a drop again is forbidden by the laws of physics, it is just extremely unlikely. To see it you’d have to wait about 10^100,000,000,000,000,000,000,000 years – a 1 followed by one hundred sextillion zeros.


For simplicity let’s start considering a room full of gas. Our four-molecule example might lead us to think that, although less likely, sometimes it should be possible to see all the gas molecules concentrated in just one side of the room. And although this is possible in principle, in practice it is not, due to the astronomically high number of molecules involved.


To see why, one has to take into account that an ordinary macroscopic system has a really huge number of molecules. The concrete figure varies from system to system, but as a representative number for a common system one can take Avogadro’s number, which is of the order of 10^23.


#Encyclopaedia Britannica: “Avogadro’s law” (retrieved 2023)

https://www.britannica.com/science/Avogadros-law 

Quote: “The specific number of molecules in one gram-mole of a substance, defined as the molecular weight in grams, is 6.02214076 × 10^23, a quantity called Avogadro’s number, or the Avogadro constant. For example, the molecular weight of oxygen is 32.00, so that one gram-mole of oxygen has a mass of 32.00 grams and contains 6.02214076 × 10^23 molecules.”


Such an enormous number of molecules means that the number of microstates associated with any ordinary macroscopic system will be unimaginably large:

#Encyclopaedia Britannica: “Entropy and disorder” (retrieved 2023)

https://www.britannica.com/science/principles-of-physical-science/Entropy-and-disorder  

Quote: “[W] is so vast for objects of everyday size as to be beyond visualization; for the helium atoms contained in one cubic centimetre of gas at atmospheric pressure and at 0 °C the number of different quantum states can be written as 1 followed by 170 million million million zeroes (written out, the zeroes would fill nearly one trillion sets of the Encyclopædia Britannica).”


(In powers of ten, a 1 followed by 100 million million million zeros is 10 to the power of 10^20, i.e. 10^100,000,000,000,000,000,000.)


For a room full of gas, let’s try to make a rough estimate of how long it would take for all the molecules to spontaneously gather in just one half of the room. One possibility is to take the above four-molecule example and do an exercise in combinatorics with 10^23 molecules. If each molecule has a probability of 1/2 of finding itself in one given half of the room, then the probability for all 10^23 molecules to be in that same half of the room should be:


P ~ (1/2)^100,000,000,000,000,000,000,000
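A number like (1/2)^N with N ~ 10^23 is far too small to evaluate directly, but its order of magnitude is easy to obtain with logarithms. A minimal Python sketch, taking N equal to Avogadro’s number:

import math

N = 6.022e23                          # number of molecules (Avogadro's number)
log10_P = N * math.log10(0.5)         # log10 of (1/2)**N

print(f"log10(P) ~ {log10_P:.1e}")
# Gives roughly -1.8e23: the probability is ~1/10^(10^23), in line with the
# rounded figure of 1 in 10^100,000,000,000,000,000,000,000 used in the text.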

Another way to calculate this probability is to start with the entropy formula of an ideal gas:

#Grimus, Walter (2013): “100th anniversary of the Sackur–Tetrode equation”. Annalen der Physik, vol. 525
https://onlinelibrary.wiley.com/doi/10.1002/andp.2013007

S = k N [ log( (E/N)^(3/2) (V/N) ) + s0 ] ,

where k is the Boltzmann constant, N is the number of molecules (which, for the formula above to be valid, has to be a large number), E is the total energy of the gas (which only depends on the number of molecules and on the temperature, both of which we keep constant), and s0 is a constant specific to each gas whose concrete value will be irrelevant in what follows. This formula means that, for a gas consisting of N molecules occupying a volume V, the entropy can be written as


S = k log [ exp(s0) (E/N)^(3/2) (V/N) ]^N = k log [ (constant·V)^N ]


Therefore, the number W of microstates associated with a gas of N molecules in a volume V is:


W(N,V) = (constant·V)^N


When the gas gathers at one side of the room it occupies half of the volume, V/2, so the number of microstates is:

W(N,V/2) = (constant·V/2)^N


As a consequence, the ratio between the number of microstates available to the gas in each situation is given by:

W(whole room) / W(one side) = 2^N


If the number N of molecules is of the order of the Avogadro number (~ 10^23), we recover the same probability that we obtained above from our simple combinatorial argument.

The ratio above will be a number with about 10^23 digits, i.e. of the order of 10 to the power 10^23, or 10^100,000,000,000,000,000,000,000. (Interestingly, the answer wouldn’t have been too different if, instead of 1/2 of the room, we had computed the fraction of microstates associated with the molecules filling 99 hundredths of the room or even 999,999 millionths of the room – we would still have obtained an amazing power of 10.)

Assuming a uniform distribution of the microstates over time, the result above basically implies that the gas will only spend 1/10^100,000,000,000,000,000,000,000 of the time in just one half of the room. What does this mean in terms of the time we would have to wait to see that happen? If the typical transition time from microstate to microstate is given by ∆t, then the average time T we should wait to see the gas in one side of the room would be:


T = 10^100,000,000,000,000,000,000,000 · ∆t

Now we could say that ∆t should be of the order of femtoseconds (10^−15 s), since this is a typical time for processes in molecular dynamics. However, the actual value of ∆t is totally irrelevant. ∆t could be femtoseconds (10^−15 s), nanoseconds (10^−9 s), years (~ 10^7 s), centuries (~ 10^9 s) or even the age of the universe (~ 10^17 s) – the prefactor is so amazingly huge that any reasonable choice of ∆t will basically give the same result. In other words: 10^100,000,000,000,000,000,000,000 years is, to an amazing degree of accuracy, the same time span as 10^100,000,000,000,000,000,000,000 femtoseconds:


10^100,000,000,000,000,000,000,000 yr ≈ 10^100,000,000,000,000,000,000,000 · 10^22 fs
= 10^(100,000,000,000,000,000,000,000 + 22) fs
≈ 10^100,000,000,000,000,000,000,000 fs.


This is so because the difference between a year and a femtosecond (a factor of 10^22) is extremely tiny compared to the prefactor (10^100,000,000,000,000,000,000,000). The validity of the approximation we’ve made above can be visualized by imagining giving 22 dollars to someone whose personal fortune amounts to one hundred sextillion dollars. For any imaginable purpose, after our gift we can still say that their fortune continues to be one hundred sextillion dollars.


Therefore we can safely say that, to see all the molecules of our gas gathering in one half of the room, we should wait about 10^100,000,000,000,000,000,000,000 years.
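The claim that the choice of time unit is irrelevant can be made concrete in a couple of lines: converting from femtoseconds to years shifts the exponent by only about 22, which is utterly negligible next to an exponent of order 10^23. A small Python sketch of this bookkeeping:

import math

exponent_in_fs = 1e23                 # waiting time ~ 10**exponent_in_fs femtoseconds
fs_per_year = 3.156e7 / 1e-15         # (seconds per year) / (seconds per femtosecond)

shift = math.log10(fs_per_year)       # converting fs -> yr changes the exponent by this much
print(f"exponent shift from the unit change: {shift:.1f}")                   # ~ 22.5
print(f"relative change of the exponent:     {shift / exponent_in_fs:.1e}")  # ~ 2e-22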


To keep calculations simple, here we’ve considered the case of an ideal gas. In the case of our glass full of ink, the expressions for the entropy and the relative volumes would be somewhat different, but the result wouldn’t change qualitatively, since all the details are dwarfed by powers of the number of molecules, which is still of the order of the Avogadro number.


This result is a general one. If we have a system in equilibrium with entropy S, fluctuations into a configuration with lower entropy S′ can occur but will only do so after a typical time of the order of

Tr ~ exp[ (S − S′)/k ] ,

where k is the Boltzmann constant:


#Dyson, Lisa et al. (2002): “Disturbing Implications of a Cosmological Constant”. Journal of High Energy Physics, vol. 2002.

https://iopscience.iop.org/article/10.1088/1126-6708/2002/10/011/meta

Quote: “The typical time for a fluctuation to occur is of order 


Tr ~ exp(S − S′) ,

where S is the equilibrium entropy and S′ is the entropy of the fluctuation.”

(the formula above uses units in which k = 1). Using the entropy formulas given above, where S is the entropy of the gas filling the whole room and S′ is the entropy of the gas filling just one side, we immediately obtain a fluctuation time of the order of


Tr ~ 2^N ~ 10^100,000,000,000,000,000,000,000 ,


where, as emphasized above, units are irrelevant.



–If you had this much time to spare, eventually by pure random chance, you’d see a red blob form again. Actually, with enough time, you could see any shape forming. Like, for example, a small, red, soggy brain. 


As emphasized above, the random motions of the molecules will explore all possible microstates – in this case, all possible ink configurations. The only difference between these configurations is the fraction of the total time that the system will spend in each of them. But if we had such a hyper-astronomical amount of time ahead of us, we’d eventually see all possible configurations emerge.



–But the universe is not a static glass, it seems to be getting bigger at an ever increasing speed because of dark energy.

Astronomers have known for quite some time that the universe is expanding – a behavior that clearly sets an arrow of time. However, until recently, they were not sure about where such an arrow of time would take us. Would the universe expand forever? Or would the gravitational pull between galaxies eventually stop the expansion, maybe even leading to a recollapse of the universe?


But a couple of decades ago astronomers discovered that, some 5 billion years ago, the expansion of the universe began to accelerate. This acceleration has persisted until now and will continue in the future. That means that the universe is not only getting bigger, but doing so at an increasing speed with every passing second. This strange acceleration happens because space itself seems to be infused with a mysterious substance dubbed “dark energy”:


#ESA (2020): “What is dark energy?”. ESA Science & Technology/Euclid (retrieved 2023)
https://sci.esa.int/web/euclid/-/what-is-dark-energy- 

Quote: “Dark energy is an unidentified component of the Universe that is thought to be present in such a large quantity that it overwhelms all other components of matter and energy put together. According to the most recent estimates from ESA's Planck mission, dark energy contributes 68 percent of the matter-energy density of the Universe.


One way to envisage dark energy is that it seems to be linked to the vacuum of space. In other words it is an intrinsic property of the vacuum. So, the larger the volume of space, the more vacuum energy (dark energy) is present and the greater its effects. [...] 


By 1998, the two teams had their results and instead of the expected deceleration, both had found that the expansion was accelerating. This was completely unexpected because nothing in known physics was capable of producing this effect. In keeping with the naming of the mysterious dark matter, astronomers began referring to whatever was causing the acceleration as dark energy.”

–Basically, everything in it is getting more and more diluted. In about 100 trillion years, the last star will die. Then few interesting things will happen for the next few decillions, vigintillions and googols of years. Eventually the universe will be a dark place fully dominated by dark energy – a rapidly expanding ball of pure space almost devoid of matter.


As space gets bigger, matter and ordinary energy get more and more diluted and have less and less effect on the evolution of the universe. Dark energy, however, does the opposite: since dark energy is inherent to space itself, more expansion creates more space and, with it, more dark energy. This accelerates the expansion of the universe even further, diluting the matter still more and creating a sort of “runaway effect” over cosmic timescales.


It is thought that after some 100 trillion years, the “stelliferous era” (“star-forming age”) will come to an end:

#Busha, Michael T. et al. (2003): “Future Evolution of Cosmic Structure in an Accelerating Universe”, The Astrophysical Journal, Volume 596
https://iopscience.iop.org/article/10.1086/378043/meta

And since even the longest-lived stars have an expected lifespan much shorter than that (about 17 trillion years), this means that the last star will also die about 100 trillion years from now.


Structures will continue to dilute and break down. Some of the last objects in this dying universe will be black holes, but not even these will last forever. It is a well-known fact that, due to quantum effects, all black holes emit radiation as if they were hot bodies at a (generally very tiny) finite temperature. This was famously discovered by Stephen Hawking in the 1970s, and the corresponding black hole radiation has since been known as “Hawking radiation”. This constant emission of particles implies that any isolated black hole will slowly lose mass until it eventually disappears, a process known as “black hole evaporation”:

#Landau, Elizabeth (2019): “10 Questions You Might Have About Black Holes”. NASA, Solar System Exploration (retrieved 2023)
https://solarsystem.nasa.gov/news/1068/10-questions-you-might-have-about-black-holes/
Quote: “Can black holes get smaller? – Yes. The late physicist Stephen Hawking proposed that while black holes get bigger by eating material, they also slowly shrink because they are losing tiny amounts of energy called "Hawking radiation."


Hawking radiation occurs because empty space, or the vacuum, is not really empty. It is actually a sea of particles continually popping into and out of existence. Hawking showed that if a pair of such particles is created near a black hole, there is a chance that one of them will be pulled into the black hole before it is destroyed. In this event, its partner will escape into space. The energy for this comes from the black hole, so the black hole slowly loses energy, and mass, by this process.


Eventually, in theory, black holes will evaporate through Hawking radiation. But it would take much longer than the entire age of the universe for most black holes we know about to significantly evaporate. Black holes, even the ones around a few times the mass of the Sun, will be around for a really, really long time!”


The time it takes for a black hole to completely evaporate grows with the black hole mass. But after a time of about a googol years (10^100 years), even supermassive black holes of tens of billions of solar masses will have evaporated:

#Toth, Viktor T: “Hawking Radiation Calculator” (used 2023)
https://www.vttoth.com/CMS/hawking-radiation-calculator
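As a rough cross-check of these figures (not taken from the calculator above), one can plug numbers into the standard order-of-magnitude formula for the Hawking evaporation time, t ≈ 5120π G^2 M^3 / (ħ c^4), which assumes emission of massless particles only; the 5·10^10 solar-mass example below is our own stand-in for a black hole of “tens of billions of solar masses”:

import math

G     = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
hbar  = 1.055e-34     # reduced Planck constant, J s
c     = 2.998e8       # speed of light, m/s
M_sun = 1.989e30      # solar mass, kg
year  = 3.156e7       # seconds per year

def evaporation_time_years(mass_kg):
    # Order-of-magnitude Hawking lifetime, t = 5120*pi*G^2*M^3 / (hbar*c^4)
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4) / year

print(f"1 solar mass:      ~10^{math.log10(evaporation_time_years(M_sun)):.0f} years")
print(f"5e10 solar masses: ~10^{math.log10(evaporation_time_years(5e10 * M_sun)):.0f} years")
# Roughly 10^67 and 10^99-10^100 years, matching the timeline below.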

Rounding off some orders of magnitude, a partial timeline of events in the long term would look something like this:

10^10 years – Today

10^14 years – Last stars die. The remaining objects will be white dwarfs, brown dwarfs, neutron stars and black holes.

10^17 years – White dwarfs cool to black dwarfs.

10^20 years – Galaxies “disintegrate” (most of the stellar corpses are either swallowed by the supermassive black hole at the center or fly away from their home galaxies).

10^23 years – Galaxy clusters disintegrate.

10^67 years – Stellar-mass black holes evaporate.

10^100 years – Supermassive black holes evaporate.


#John Baez (2016): “The End of the Universe”. University of California, Riverside (retrieved 2023)
https://math.ucr.edu/home/baez/end.html 

Quote: “In about 10^14 years, all normal star formation processes will have ceased, and the universe will have a population of stars consisting of about 55% white dwarfs, 45% brown dwarfs and a small number of neutron stars and black holes. [...] The white dwarfs will cool to black dwarfs with a temperature of at most 5 Kelvin in about 10^17 years, and the galaxies will boil away by about 10^19 years. [...] In about 10^23 years the dead stars will actually boil off from the galactic clusters, not just the galaxies, so the clusters will disintegrate.”


So in the very long term, basically nothing will be left and the universe will essentially be empty space dominated by dark energy. Today, the most accepted explanation for that energy is that it corresponds to what physicists call the “cosmological constant”, a term originally introduced by Einstein in his equations of general relativity and commonly denoted by the Greek letter Λ (“lambda”):

#ESA (2020): “What is dark energy?”. ESA Science & Technology/Euclid (retrieved 2023)

https://sci.esa.int/web/euclid/-/what-is-dark-energy-  

Quote: “Now almost a quarter of a century after its discovery, understanding the acceleration remains one of the most compelling challenges of cosmology and fundamental physics. The precise nature of dark energy continues to remain mysterious. The best working hypothesis is something that Albert Einstein suggested back in 1917. Shortly after he published the General Theory of Relativity, his description of the gravity and the Universe on its largest scales, Einstein introduced the 'cosmological constant' into his calculations. The cosmological constant is an energy field that is present across the entire Universe.”


This model of the universe is known as the “ΛCDM model” (where CDM stands for “cold dark matter”) and currently provides the best theoretical description of all observed cosmological phenomena. Since dark energy (or the cosmological constant) is inherent to space itself, it can also be interpreted as the energy intrinsic to the vacuum. From now on, the terms “dark energy”, “cosmological constant” and “vacuum energy” will therefore be used as synonyms.


A universe without matter and dominated by a cosmological constant is known as a “de Sitter universe”, after the Dutch astronomer Willem de Sitter (1872-1934). So our best cosmological model predicts that, in the very long run, our universe will become a de Sitter universe:

#Dyson, Lisa et al. (2002): “Disturbing Implications of a Cosmological Constant”. Journal of High Energy Physics, vol. 2002.

https://iopscience.iop.org/article/10.1088/1126-6708/2002/10/011/meta    

Quote: “The conventional view is that the universe will end in a de Sitter phase with all matter being infinitely diluted by exponential expansion.”


#Carroll, Sean (2020): “Why Boltzmann Brains Are Bad”. Dasgupta Shamik et al. (eds), Current Controversies in Philosophy of Science, Routledge.

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315713151-3/boltzmann-brains-bad-sean-carroll  

https://arxiv.org/abs/1702.00850  (open-access version)

Quote: “If the vacuum energy remains constant, the universe asymptotically approaches a de Sitter phase.”



You might think that this would lead to the ultimate death of everything, but dark energy has one last surprise for you. In a universe dominated by dark energy, space expands so dramatically that it creates a "cosmic horizon" around you: a border beyond which nothing will ever be able to reach you, not even light. So for every practical purpose, the universe has become a glass of finite size about 36 billion light-years wide, surrounded by an impassable cosmic horizon. Such a universe glass is basically a giant black hole turned inside out.


In a universe dominated by dark energy, space expands so dramatically that it creates a "cosmic horizon" around any given observer: a border beyond which not even light can reach the observer. As a rough analogy: if someone located beyond the horizon aimed a laser at us, the photons would be like fish trying to swim upstream in a high-speed river. They would advance with respect to the water they are in, but the water itself flows downstream so fast that they can never move forwards.

This cosmic horizon is very similar to a black hole horizon (since light from the other side cannot cross it) but has the shape of a sphere surrounding the observer – this is what is meant above by “a black hole turned inside out”:


#Carroll, Sean (2020): “Why Boltzmann Brains Are Bad”. Dasgupta Shamik et al. (eds), Current Controversies in Philosophy of Science, Routledge.

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315713151-3/boltzmann-brains-bad-sean-carroll  

https://arxiv.org/abs/1702.00850  (open-access version)

Quote: “Like a black hole, de Sitter space has a horizon; unlike a black hole, we think of ourselves as being surrounded by the horizon, rather than looking at it from outside. Objects that are sufficiently far away (roughly the Hubble radius, H0^−1 ~ 10^10 light-years) cannot send signals to us even at the speed of light, since the space in between is expanding too rapidly.”


Although the universe as a whole is expanding, the cosmic horizon doesn’t grow: it remains at a fixed distance from the observer, set by the value of the cosmological constant Λ:

#Albrecht, Andreas et al. (2004): “Can the universe afford inflation?”. Physical Review D, vol. 70

https://journals.aps.org/prd/abstract/10.1103/PhysRevD.70.063528  

https://arxiv.org/abs/hep-th/0405270  (open-access version) 

Quote: “Dyson, Kleban, and Susskind [10] consider the case where the current cosmic acceleration is given by a fundamental cosmological constant Λ. In that picture, the universe in the future approaches a de Sitter space, with a finite region enclosed in a horizon [...]. The horizon radius RΛ is given by

RΛ = √(3/Λ) ”


The value of the cosmological constant Λ (or equivalently, the density of dark energy) has been measured by astronomers in recent years and it has been found to be of the order of 10^−52 m^−2:


#Haber, Howard (2015): “The Cosmological Constant Problem”. Santa Cruz Institute for Particle Physics, University of California Santa Cruz (retrieved 2023).
http://scipp.ucsc.edu/~haber/ph171/CosmoConstant15.pdf 

Using this value for Λ and the above formula for the radius of the cosmic horizon, we get

RΛ = √(3/Λ) ≈ 1.7·10^26 m ≈ 18 billion light-years

Since the region outside the horizon is disconnected from the inside and cannot affect it physically in any way, for all practical purposes our future universe has become a “spherical box” with a diameter of 36 billion light years. 
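A quick numerical check of these numbers in Python (a sketch; Λ ~ 10^−52 m^−2 is the approximate measured value used above):

import math

Lambda     = 1e-52          # cosmological constant, m^-2 (order of magnitude quoted above)
light_year = 9.461e15       # metres per light-year

R = math.sqrt(3 / Lambda)   # de Sitter horizon radius, R_Lambda = sqrt(3/Lambda)

print(f"R_Lambda ~ {R:.1e} m ~ {R / light_year / 1e9:.0f} billion light-years")
# Twice this radius gives the ~36-billion-light-year "diameter" quoted above.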


#Carroll, Sean (2020): “Why Boltzmann Brains Are Bad”. Dasgupta Shamik et al. (eds), Current Controversies in Philosophy of Science, Routledge.

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315713151-3/boltzmann-brains-bad-sean-carroll   

https://arxiv.org/abs/1702.00850  (open-access version)

Quote: “If the vacuum energy remains constant, the universe asymptotically approaches a de Sitter phase. Such a spacetime resembles, in certain ways, a box of gas in equilibrium.”


Moreover, since all matter has diluted completely and no more changes in the constitution of the universe are expected to happen, this state will be eternal:


#Dyson, Lisa et al. (2002): “Disturbing Implications of a Cosmological Constant”. Journal of High Energy Physics, vol. 2002.

https://iopscience.iop.org/article/10.1088/1126-6708/2002/10/011/meta   

Quote: “These assumptions–together with the existence of a final cosmological constant–imply that the universe is eternal but finite. Strictly speaking, by finite we mean that the entropy of the observable universe is bounded, but we can loosely interpret this as saying the system is finite in extent.”



We know that, due to quantum effects, all black holes emit a tiny amount of particles – a phenomenon known as "Hawking radiation". And so does our inside out black hole. In the end, this radiation will fill the universe glass with particles again. At this point, so far in the future that giving you a number has no more meaning, we have reached the true final state.


As explained above, all black holes emit a faint radiation. This is also true for our “black hole turned inside-out”, i.e. the cosmic horizon of our future de Sitter universe. This horizon will be radiating particles as if it were a hot body at a very tiny temperature, set by the cosmological constant:

#Carroll, Sean (2020): “Why Boltzmann Brains Are Bad”. Dasgupta Shamik et al. (eds), Current Controversies in Philosophy of Science, Routledge.

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315713151-3/boltzmann-brains-bad-sean-carroll   

https://arxiv.org/abs/1702.00850  (open-access version)

Quote: “If the vacuum energy remains constant, the universe asymptotically approaches a de Sitter phase. Such a spacetime resembles, in certain ways, a box of gas in equilibrium, with a de Sitter temperature TdS = √(Λ/12π^2) ~ 10^−33 eV. (I will use units where ħ = c = k = 1 unless explicitly indicated.)”


The value above is expressed in units of energy since the calculation is done in “natural units”, in which the fundamental constants ħ = c = k = 1. To convert the value from electronvolts (eV) to kelvin, we just have to reinstate the Boltzmann constant by dividing by its value in eV/K, k ≈ 8.6·10^−5 eV/K. This gives a temperature of the order of 10^−29 K, which for simplicity has been rounded to 10^−30 K (the precise value is irrelevant).
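The same conversion, spelled out as a two-line Python sketch:

k_eV_per_K = 8.617e-5            # Boltzmann constant in eV/K

T_dS_eV = 1e-33                  # de Sitter temperature in energy units (natural units)
T_dS_K  = T_dS_eV / k_eV_per_K   # reinstate k to get kelvin

print(f"T_dS ~ {T_dS_K:.0e} K")  # ~ 1e-29 K, rounded in the text to 10^-30 K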



The universe has now become a closed box full of particles at an extremely low but finite temperature. And since they have a temperature, they undergo random motions. Or in other words: a glass filled with water and ink and an infinite amount of time ahead. Things are about to become interesting again.


Our future universe is therefore a gas of particles in a huge box, i.e. a cosmic equivalent of the glass full of ink discussed at the beginning. As such, it will undergo random fluctuations and, over very long timescales, adopt all possible microstates.



–The fluctuating particles are bumping into each other over and over and over again creating every possible combination of particles that is possible. They are like a monkey typing at random on a typewriter. Almost all of the time it types gibberish. But with enough time, sooner or later it will write the first act of Hamlet. And with even more time, the complete works of Shakespeare.

The example of a monkey hitting keys at random on a typewriter is based on a mathematical theorem of probability theory known as “the infinite monkey theorem”. This theorem states that if we have an infinite number of monkeys typing at random on a typewriter (or, equivalently, a single monkey with an infinite amount of time ahead), they will eventually type any given finite string of characters with probability 1. Moreover, the given string will reappear an infinite number of times. The example usually chosen to illustrate the result is indeed the complete works of Shakespeare.


#Gut, Allan (2013): “Probability: A Graduate Course”. Springer Texts in Statistics; Springer, New York. 

https://link.springer.com/book/10.1007/978-1-4614-4708-5
Copy at the National Academic Digital Library of Ethiopia: http://ndl.ethernet.edu.et/bitstream/123456789/27047/1/Allan%20Gut_2005.pdf
Quote: “The Monkey and the Typewriter – A classical, more humorous, example states that if one puts a monkey at a typewriter he (or she) will “some day” all of a sudden have produced the complete works of Shakespeare, and, in fact, repeat this endeavor infinitely many times. In between successes the monkey will also complete the Uppsala telephone directory and lots of other texts.

Let us prove that this is indeed the case. Suppose that the letters the monkey produces constitute an independent sequence of identically distributed random variables. Then, by what we have just shown for coins, and extended in the exercises, every finite sequence of letters will occur (infinitely often!) with probability 1. And since the complete works of Shakespeare (as well as the Uppsala telephone directory) are exactly that, a finite sequence of letters, the proof is complete – under these model assumptions, which, of course, can be debated. After all, it is not quite obvious that the letters the monkey will produce are independent of each other .... 


Finally, by the same argument it follows that the same texts also will appear if we spell out only every second letter or every 25th letter or every 37,658th letter.”

 
Of course, the “monkeys” mentioned in the theorem are not supposed to be actual monkeys, but a metaphor for any process able to generate random outputs from a finite set. 
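To get a feeling for the waiting times involved, here is a small Python sketch (our own illustration, not taken from the textbook above). For a target phrase with no self-overlap, the expected number of uniformly random keystrokes before it first appears is exactly A^L, where A is the alphabet size and L the phrase length; for general phrases it is at least that:

ALPHABET_SIZE = 26      # lowercase letters only, no spaces or punctuation

def expected_keystrokes(phrase: str) -> int:
    # Lower bound on the expected waiting time; exact for phrases with no self-overlap.
    return ALPHABET_SIZE ** len(phrase)

for phrase in ("tobe", "tobeornottobe"):
    print(f"'{phrase}': ~{expected_keystrokes(phrase):.1e} random keystrokes")
# 'tobe' needs ~4.6e5 keystrokes, 'tobeornottobe' already ~2.5e18; the complete
# works of Shakespeare (millions of characters) would need a number of keystrokes
# with millions of digits -- huge, but still finite, which is all the theorem needs.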


The comparison between the infinite monkey theorem and the class of random particle fluctuations that we are discussing has been made several times, most famously by physicist and astronomer Arthur Eddington almost one hundred years ago:

#Eddington, Arthur (1927): “The Nature of the Physical World: The Gifford Lectures 1927”. Cambridge University Press, 2012

https://www.cambridge.org/de/academic/subjects/physics/history-philosophy-and-foundations-physics/nature-physical-world-gifford-lectures-1927

– If ink in our universe glass generates random arrangements of particles, what could they be? Well, a spontaneous fluctuation could give rise to a planet. Or to a galaxy. Or even to a lot of them. So maybe our universe has already ended and all we see around us is a “pop-up universe” – not a universe that evolved from a Big Bang, but one that fluctuated into existence by pure chance. And that, like the drop of ink, will only exist for a while before dissolving again.


The possibility that the universe we see around us (“small regions of the size of our galaxy”, as quoted in the reference below) might have emerged as a spontaneous fluctuation in an already dead universe was considered in 1897 by Ludwig Boltzmann. Boltzmann proposed this idea in an effort to reconcile the apparent absolute validity of the Second Law of thermodynamics with the statistical interpretation of entropy (since the latter implied that the Second Law wasn’t really absolute or fundamental, but just a matter of probability):


#Barrow, John and Tipler, Frank (1986): The Anthropic Cosmological Principle, Oxford University Press.

https://global.oup.com/academic/product/the-anthropic-cosmological-principle-9780192821478?cc=de&lang=en&

–Being random, such pop-up universes could be similar to ours but with funny glitches. In some of these universes dinosaurs are riding snails. In another, the stars are made from blueberries. Maybe in another you are wearing a funny hat. 


Being the product of a random thermal fluctuation, such “pop-up universes” don’t have to be identical to ours, but could include strange variations, like a hotter cosmic microwave background (CMB). And such “glitch-universes” would be, in fact, much more abundant statistically than universes like ours:


#Dyson, Lisa et al. (2002): “Disturbing Implications of a Cosmological Constant”. Journal of High Energy Physics, vol. 2002.

https://iopscience.iop.org/article/10.1088/1126-6708/2002/10/011/meta 
Quote: “What is worse is that there are even more states which are macroscopically different than our world but still would allow life as we know it. As an example, consider a state in which we leave everything undisturbed, except that we replace a small fraction of the matter in the universe by an increase in the amount of thermal microwave photons. In particular, we could do this by increasing the temperature of the CMB from 2.7 degrees to 10 degrees. Everything else, including the abundances of the elements, is left the same. Naively one might think that this is impossible; no consistent evolution could get to this point since the extra thermal energy in the early universe would have destroyed the fragile nuclei. But on second thought, there must be a possible starting point which would eventually lead to this “impossible” state. To see this, all we have to do is run the configuration backward in time. Either classically or quantum mechanically, the reverse evolution will eventually lead back to a state which looks entirely thermal, but which, if run forward, will lead us back to where we began. In a theory dependent on Poincaré recurrences, there would be many more “events” in which the universe evolves into this modified state. Thus it would be vastly more likely to find a world at 10 degrees with the usual abundances than in one at 2.7 degrees.”



–Scientists in such universes wouldn't understand those glitches, so maybe the greatest mysteries of physics are just nonsense bugs of our pop-up universe.


Scientists in those universes wouldn’t understand the reason for those “glitches”, like a hotter CMB, and would have to accept them as unlikely statistical coincidences:


#Dyson, Lisa et al. (2002): “Disturbing Implications of a Cosmological Constant”. Journal of High Energy Physics, vol. 2002.

https://iopscience.iop.org/article/10.1088/1126-6708/2002/10/011/meta 
Quote: “All of these worlds would be peculiar. The helium abundance would be incomprehensible from the usual arguments. In all of these worlds statistically miraculous (but not impossible) events would be necessary to assemble and preserve the fragile nuclei that would ordinarily be destroyed by the higher temperatures. However, although each of the corresponding histories is extremely unlikely, there are so many more of them than those that evolve without “miracles,” that they would vastly dominate the livable universes that would be created by Poincaré recurrences. We are forced to conclude that in a recurrent world like de Sitter space our universe would be extraordinarily unlikely.”



–But not all possible fluctuations of our dead universe have the same probability to occur. Smaller fluctuations are much more probable than bigger ones. A planet is more likely than a galaxy. But you know what is even way more likely? A human brain.


This result is an intuitive one: in our dead universe, a fluctuation giving rise to a small system (like a planet) will be more likely than a fluctuation giving rise to a huge system (like a galaxy). A more detailed explanation is given below.



4. Are You Actually Just a Brain?


–You think, therefore you exist. But what else do you truly know? In the end, your brain is just interpreting signals from your senses and creating a world that you experience. So technically, you could be just your brain that thinks the world is real. And if we follow the logic of the ink in the universe glass, in particular, you could be a disembodied brain that, just by chance, emerged in a dead universe with your complete set of knowledge and memories.


As explained above, when we have a system in equilibrium with entropy S, fluctuations into a configuration with lower entropy S′ will occur after a typical time of the order 


Tr ~ exp[ (S − S′)/k ] .

The longer this time, the less probable the corresponding fluctuation. This expression therefore shows that the probability of a fluctuation is controlled by S − S′, i.e. by the change ∆S in the total entropy of our system.


Now our equilibrium system is our dead universe: a huge “box” filled with a gas of particles in a state of maximal disorder and at a temperature of TdS ~ 10^−30 K. If a fluctuation occurs that gives rise to an ordered subsystem, like a galaxy or a planet, the resulting change in entropy will be given by the number of particles that have to be removed from the gas in order to create the subsystem in question. Therefore a smaller subsystem will imply a smaller change in entropy, and hence a shorter time (i.e. a higher probability) for it to appear.

The Boltzmann Brain problem arises when we take this conclusion to its logical extreme: in a dead universe, the most probable fluctuation compatible with our experience as conscious observers is not a whole universe like the one we see, but a single brain that reproduces those observations. The corresponding change in entropy is given by:

#Carroll, Sean (2020): “Why Boltzmann Brains Are Bad”. Dasgupta Shamik et al. (eds), Current Controversies in Philosophy of Science, Routledge.

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315713151-3/boltzmann-brains-bad-sean-carroll  

https://arxiv.org/abs/1702.00850  (open-access version)

Quote: “For simplicity let us model the universe as a thermal gas of relativistic particles with a temperature TdS ~ 10^−33 eV (so the particles would essentially be all photons and gravitons). The entropy of a region of space filled with such a gas is of order the number of particles, and the energy per particle is of order TdS. The entropy decrease when a BB is created is therefore simply the number of thermal particles removed from the gas in order to make it; for a BB of mass M we have

∆S = M / TdS .

If we imagine that a functioning brain needs to include at least Avogadro’s number (6×10^23) protons and neutrons, this gives us


∆S ~ 10^66 .”


From the formula above we see explicitly that the change in entropy is proportional to the mass of the fluctuation. So, as anticipated above, a fluctuation with a small mass (like a brain) will be more likely than a fluctuation with a large mass (like a galaxy).


We see that the change in entropy associated with the spontaneous creation of a brain is huge, and so will be the typical time for such a fluctuation to appear: 


Tr ~ exp(10^66) ,

where the number is so large that, once again, the choice of units becomes irrelevant. However, since our dead universe has an infinite amount of time ahead, such a fluctuation is guaranteed to happen. In fact, it will happen an infinite number of times.
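Carroll’s estimate ∆S ~ 10^66 can be reproduced with a few lines of arithmetic: take Avogadro’s number of nucleons, each contributing a rest energy of roughly 1 GeV to the mass M, and divide by the de Sitter temperature. A minimal Python sketch:

import math

N_nucleons   = 6e23     # Avogadro's number of protons and neutrons (Carroll's assumption)
E_nucleon_eV = 1e9      # rest energy of one nucleon, ~1 GeV, in eV
T_dS_eV      = 1e-33    # de Sitter temperature in eV (natural units, k = 1)

M_eV    = N_nucleons * E_nucleon_eV     # mass-energy of the brain, in eV
delta_S = M_eV / T_dS_eV                # entropy decrease, Delta S = M / T_dS

print(f"Delta S ~ 10^{math.log10(delta_S):.0f}")
# ~10^66; the waiting time exp(Delta S) is far too large for floating point.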

Such brains would form out of gravitons and photons (the kind of particles that will fill our future de Sitter universe, given its extremely low temperature), and the assembly process would in general be slow:

#Carroll, Sean (2020): “Why Boltzmann Brains Are Bad”. Dasgupta Shamik et al. (eds), Current Controversies in Philosophy of Science, Routledge.

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315713151-3/boltzmann-brains-bad-sean-carroll  

https://arxiv.org/abs/1702.00850  (open-access version)

Quote: “Here are some salient features of the process by which BBs would fluctuate into existence. [...]



–This is a pretty bizarre idea but if we do the math, it is kind of pretty solid. Let’s compare the number of brains inside bodies in a living universe with the number of naked brains in a dead universe. Let’s go really big and imagine that 100 quadrillion humans will ever live around Earth. And that the same amount of people will live around every star in the universe. If we add this together we get about 10^41 brains inside bodies that will exist.


The number of humans that have ever lived on Earth until today has been estimated to be of the order of 100 billion:


#Kaneda, Toshiko (2022): “How Many People Have Ever Lived on Earth?”. Population Reference Bureau (retrieved 2023)

https://www.prb.org/articles/how-many-people-have-ever-lived-on-earth/
Quote: “Calculating the number of people who have ever lived is part science and part art. No demographic data exist for more than 99% of the span of human existence. Still, with some assumptions about population size throughout human history, we can get a rough idea of this number: About 117 billion members of our species have ever been born on Earth.”


Regarding the future, demographic models project that the world population will probably stabilize around the end of this century. If the world population stabilizes, the number of births per year will equal the number of deaths per year. If we are able to attain a high but sustainable population, the number of deaths (and hence of births) is expected to stabilize at about 100 million per year by the end of the century:


#UN, Department of Economic and Social Affairs, Population Division (2022): “World Population Prospects 2022” (retrieved 2023):

https://population.un.org/wpp/Graphs/Probabilistic/MORT/Deaths/900

So let’s assume that, from this century on, about 100 million people will be born every year, and that this trend will continue until the Earth ceases to be habitable. This is expected to happen about one billion years from now:

#K.-P. Schröder (2008): “Distant future of the Sun and Earth revisited”. Monthly Notices of the Royal Astronomical Society, Volume 386

https://academic.oup.com/mnras/article/386/1/155/977315 

Quote: “Certainly, with the 10 per cent increase of solar luminosity over the next 1 Gyr (see previous section), it is clear that Earth will come to leave the HZ [habitable zone] already in about a billion years time, since the inner (hot side) boundary will then cross 1 au. [... ] What will happen on the Earth itself? Ignoring for the moment the short-time-scale (decades to centuries) problems currently being introduced by climate change, we may expect to have about one billion years before the solar flux has increased by the critical 10 per cent mentioned earlier. At that point, neglecting the effects of solar irradiance changes on the cloud cover, the water vapour content of the atmosphere will increase substantially and the oceans will start to evaporate (Kasting 1988). An initially moist greenhouse effect (Laughlin 2007) will cause runaway evaporation until the oceans have boiled dry.”


Therefore, if over the next billion years 100 million people are born every year, we find that some 10^17 humans (100 quadrillion) will ever live on Earth (this figure is orders of magnitude higher than the 10^11 humans who have lived until now, so we can neglect the latter).

Although it’s quite probably not the case, let’s further assume that this number applies to every single star in the observable universe. The number of stars in the observable universe is estimated to be of the order of 10^22 to 10^24:

#ESA: “How many stars are there in the Universe?” (retrieved 2023)
https://www.esa.int/Science_Exploration/Space_Science/Herschel/How_many_stars_are_there_in_the_Universe 

Quote: “For the Universe, the galaxies are our small representative volumes, and there are something like 10^11 to 10^12 stars in our Galaxy, and there are perhaps something like 10^11 or 10^12 galaxies. With this simple calculation you get something like 10^22 to 10^24 stars in the Universe.”


In such a case, we’d get some 10^17·10^24 = 10^41 conscious observers in the observable universe. As a simple comparison to appreciate the sheer magnitude of such a number, the number of ants on Earth has been estimated at 2·10^16:

#P. Schultheiss et al. (2022): “The abundance, biomass, and distribution of ants on Earth”. Proceedings of the National Academy of Sciences, vol. 119
https://www.pnas.org/doi/full/10.1073/pnas.2201550119 

Quote: “Integrating data from all continents and major biomes, we conservatively estimate 20 × 10^15 (20 quadrillion) ants on Earth, with a total biomass of 12 megatons of dry carbon.”


This calculation is nothing more than a very rough estimate. The actual number of conscious observers in the universe could be several orders of magnitude higher or lower, but as we will see, the exact figure will be irrelevant.
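
For completeness, the same back-of-the-envelope calculation extended to the whole observable universe; this is only a sketch built on the round numbers quoted above (humans per star, the high end of the ESA star count, and the ant estimate):

```python
# Extremely rough estimate of "ordinary" conscious observers in the
# observable universe, assuming every star matches Earth's output of humans.
observers_per_star = 1e17        # humans per Earth-like history (computed above)
stars_in_universe = 1e24         # high end of the ESA estimate (1e22 to 1e24)
ants_on_earth = 2e16             # Schultheiss et al. (2022)

total_observers = observers_per_star * stars_in_universe      # ~1e41
print(f"observers: {total_observers:.0e}")
print(f"~{total_observers / ants_on_earth:.0e} times the number of ants on Earth")
```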



–However, in a dead universe that has had enough time to explore all possible fluctuations and that will exist forever, the number of naked brains that would emerge is, well… infinite. So the probability that you are a floating brain is not only vastly larger than the probability that you are a real human. It is so inconceivably larger that we can’t even meaningfully quantify the difference. How do you compare a number to infinity?


As computed above, the typical time needed for our dead universe to generate a Boltzmann Brain is of the order of exp(10^66) (again, this number is so big that it doesn’t matter whether we measure it in seconds, years, or current ages of the universe). However, the time ahead is infinite, so such fluctuations will emerge an infinite number of times.
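
To see why the choice of units is irrelevant for a number like exp(10^66), one can work in log space; the sketch below only uses the familiar conversion factors between seconds, years, and the roughly 14-billion-year age of the universe:

```python
import math

# Converting exp(10**66) between units only adds a few tens to the exponent,
# which is invisible next to 10**66 itself.
seconds_per_year = 3.156e7
seconds_per_universe_age = 13.8e9 * seconds_per_year    # ~4.4e17 s

# exp(1e66) years  = exp(1e66 + ln(seconds_per_year)) seconds
# exp(1e66) "ages" = exp(1e66 + ln(seconds_per_universe_age)) seconds
print(round(math.log(seconds_per_year)))          # ~17
print(round(math.log(seconds_per_universe_age)))  # ~41
# A shift of ~17 or ~41 in an exponent of 10**66 is a relative change of ~1e-65.
```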

A more concrete estimate can be computed if we assume that our de Sitter universe persists at least for a “recurrence time”, i.e. the average time that a finite physical system needs to “repeat itself”. For our de Sitter universe, such a time is of the order of exp(10^122):

#Carroll, Sean (2020): “Why Boltzmann Brains Are Bad”. Dasgupta Shamik et al. (eds), Current Controversies in Philosophy of Science, Routledge.

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315713151-3/boltzmann-brains-bad-sean-carroll  

https://arxiv.org/abs/1702.00850  (open-access version)

Quote: “[...] the recurrence theorem, put forward by Poincaré in 1890. Starting from any initial state and evolving for a sufficiently long time, any classical Hamiltonian system with a bounded phase space will eventually return arbitrarily closely to the original state [6].”


Quote: “Universe B, meanwhile, we take to persist for the recurrence time appropriate for a system with an entropy given by the de Sitter entropy of our universe, 


t_B ~ exp(S_dS) ~ exp(10^122) s. 


This is an extremely long time. So much so that it doesn’t really matter what units we use to measure it with; the number is essentially the same whether measured in Planck times or Hubble times”


This is the typical time scale characterizing physical changes in our dead universe. In just one recurrence time, the number of Boltzmann Brains that our dead universe will generate is of order:

Number of Boltzmann Brains ~ exp(10^122) / exp(10^66) = exp(10^122 – 10^66) ~ exp(10^122).


So on the one hand, the number of “real” conscious observers in an evolving universe will be a number somewhere “close” to the 10^41 estimated above (be it 10^20 or 10^100). On the other hand, the number of Boltzmann Brains generated in our dead universe will be at least of the order of exp(10^122), which is incommensurably larger. Therefore, if we assume that we are typical observers, we are forced to conclude that the probability that we are “real” observers is negligibly small compared to the probability that we are Boltzmann Brains.
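
Since exp(10^122) overflows any ordinary number type, the comparison is easiest to appreciate in log space; a minimal sketch (the 10^41 figure is the rough observer count from the first part of this section):

```python
import math

# Compare N_real ~ 10**41 ordinary observers with N_BB ~ exp(10**122)
# Boltzmann Brains by comparing their natural logarithms.
log_n_real = 41 * math.log(10)    # ln(10**41) ~ 94
log_n_bb = 1e122                  # ln(exp(10**122)) = 10**122

print(round(log_n_real))                # ~94
print(f"{log_n_bb / log_n_real:.0e}")   # ~1e+120: even the *logarithms* differ hugely
# Raising the "real" count to 10**100 only raises its log to ~230: still negligible.
```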



–So… are you a floating brain that exists for one moment in time, then basically forever passes and then you exist for another moment in time? Maybe not even in that order? Maybe your life happens backwards and you just do not notice? Maybe you have lived trillions of times already. Are you the dream of a dead universe? 


Being a Boltzmann Brain would mean that we cannot be sure that our recollections (both the events we know or remember and their temporal and spatial ordering) bear any relation to physical reality. As we will see below, this observation lies at the root of a possible solution to the Boltzmann Brain paradox.


5. Really? Like, really?


–Well, probably not. First of all, there are a few loopholes. For example, dark energy could behave completely differently from what we think today and lead us to another future.

Our derivation of the far future of our universe depends on the assumption that dark energy (the mysterious agent accelerating the expansion of the universe) is given by a cosmological constant: a fundamental constant of nature whose value never changes over time or across space. Only in this case will the future universe become a de Sitter universe.

The assumption that dark energy is due to a cosmological constant is at the core of today’s most widely accepted cosmological model: 


#ESA (2020): “What is dark energy?”. ESA Science & Technology/Euclid (retrieved 2023)

https://sci.esa.int/web/euclid/-/what-is-dark-energy-  

Quote: “Now almost a quarter of a century after its discovery, understanding the acceleration remains one of the most compelling challenges of cosmology and fundamental physics. The precise nature of dark energy continues to remain mysterious. The best working hypothesis is something that Albert Einstein suggested back in 1917. Shortly after he published the General Theory of Relativity, his description of the gravity and the Universe on its largest scales, Einstein introduced the 'cosmological constant' into his calculations. The cosmological constant is an energy field that is present across the entire Universe.”


This model is known as “ΛCDM”, where the Greek letter Λ is the symbol physicists use to denote the cosmological constant, and CDM stands for “cold dark matter”. However, this model of the universe could well be proven wrong some day; in fact, several alternative explanations of dark energy have already been proposed:

#ESA (2020): “What is dark energy?”. ESA Science & Technology/Euclid (retrieved 2023)

https://sci.esa.int/web/euclid/-/what-is-dark-energy-  

Quote: “Now cosmologists have re-introduced the cosmological constant because it could be the simplest way to explain the observations. There are alternative suggestions. For example, the acceleration could be produced by a new force of nature or due to a misunderstanding of the way General Relativity works. Each explanation subtly alters the way the acceleration develops across cosmic time but as yet no experiment has been capable of measuring the acceleration in sufficient detail to distinguish between the possible solutions.”


If dark energy is due to a new force of nature that changes behavior with time or to a hitherto unknown behavior of gravity, the far future of the universe will be very different.

–Or maybe our dead universe will be too motionless to allow the creation of brains, even with infinite time.

Even if our universe ends up being a de Sitter universe, it might be that such a state is too stationary to allow the creation of brains:

#Boddy, Kimberly K. et al. (2017): “Why Boltzmann Brains do not Fluctuate into Existence from the de Sitter Vacuum”. The Philosophy of Cosmology, Part III - Foundations of Cosmology: Gravity and the Quantum.
https://www.cambridge.org/core/books/abs/philosophy-of-cosmology/why-boltzmann-brains-do-not-fluctuate-into-existence-from-the-de-sitter-vacuum/ADA7A746CF4B316351BE5276FD578213 

https://arxiv.org/abs/1505.02780 (open-access version) 

Quote: “Many modern cosmological scenarios feature large volumes of spacetime in a de Sitter vacuum phase. Such models are said to be faced with a "Boltzmann Brain problem" - the overwhelming majority of observers with fixed local conditions are random fluctuations in the de Sitter vacuum, rather than arising via thermodynamically sensible evolution from a low-entropy past. We argue that this worry can be straightforwardly avoided in the Many-Worlds (Everett) approach to quantum mechanics, as long as the underlying Hilbert space is infinite-dimensional. In that case, de Sitter settles into a truly stationary quantum vacuum state. While there would be a nonzero probability for observing Boltzmann-Brain-like fluctuations in such a state, "observation" refers to a specific kind of dynamical process that does not occur in the vacuum (which is, after all, time-independent). Observers are necessarily out-of-equilibrium physical systems, which are absent in the vacuum. Hence, the fact that projection operators corresponding to states with observers in them do not annihilate the vacuum does not imply that such observers actually come into existence. The Boltzmann Brain problem is therefore much less generic than has been supposed.”

–Or maybe the universe will end up dying in another way. Our understanding of the cosmos is not standing on solid enough feet for anyone to worry if they are real or not.

A final possibility is that something happens that prevents the universe from evolving into a de Sitter universe. Such an event could be a sudden change in the fundamental properties of the universe, known as “vacuum decay”:

#Page, Don N. (2008): “Is our Universe likely to decay within 20 billion years?”. Physical Review D78, 063535.
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.78.063535
https://arxiv.org/abs/hep-th/0610079 (open-access version)
Quote: “Observations that we are highly unlikely to be vacuum fluctuations suggest that our universe is decaying at a rate faster than the asymptotic volume growth rate, in order that there not be too many observers produced by vacuum fluctuations to make our observations highly atypical. An asymptotic linear e-folding time of roughly 16 Gyr (deduced from current measurements of cosmic acceleration) would then imply that our universe is more likely than not to decay within a time that is less than 19 Gyr in the future.”


#Kimberly K. Boddy et al. (2013): “Can the Higgs Boson Save Us From the Menace of the Boltzmann Brains?”, arXiv preprint.
https://arxiv.org/abs/1308.4686
Quote: “The standard ΛCDM model provides an excellent fit to current cosmological observations but suffers from a potentially serious Boltzmann Brain problem. If the universe enters a de Sitter vacuum phase that is truly eternal, there will be a finite temperature in empty space and corresponding thermal fluctuations. Among these fluctuations will be intelligent observers, as well as configurations that reproduce any local region of the current universe to arbitrary precision. We discuss the possibility that the escape from this unacceptable situation may be found in known physics: vacuum instability induced by the Higgs field. Avoiding Boltzmann Brains in a measure-independent way requires a decay timescale of order the current age of the universe, which can be achieved if the top quark pole mass is approximately 178 GeV. Otherwise we must invoke new physics or a particular cosmological measure before we can consider ΛCDM to be an empirical success.”

We’ve explained the phenomenon of vacuum decay in another video:

#Kurzgesagt – In a Nutshell (2016): “The Most Efficient Way to Destroy the Universe – False Vacuum”

https://www.youtube.com/watch?v=ijFm6DxNVyI

 –Loopholes aside, if you were a fluctuating brain, all the laws of physics stored in your brain would have originated at random and shouldn’t bear any relation to the real world. But we just used those laws to prove that you are a floating brain! So even if you believe that you are a floating brain, you’d have to admit that you have no good reason to believe that you are actually a floating brain.


However, there are two main arguments that have been presented as evidence against the idea that we are Boltzmann Brains. Both of them have to do with the kind of knowledge or recollections that we experience as conscious observers.


The first argument could be paraphrased as follows. All our knowledge and personal recollections look extremely “ordered”; that is, they “make sense”. We don't remember having ever seen an inkblot spontaneously forming in a bucket full of colored water, having ever seen the Sun rising in the West, or having seen things falling sometimes upwards and sometimes downwards. The vast majority of our recollections conform to a comparatively tiny set of “laws” which, moreover, are consistent with each other. But if all of our recollections had originated from a random fluctuation, they should also be highly random, i.e. much crazier. This has been taken as evidence that most probably we are not Boltzmann Brains:


#Page, Don N. (2008): “Is our Universe likely to decay within 20 billion years?”. Physical Review D78, 063535.
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.78.063535
https://arxiv.org/abs/hep-th/0610079 (open-access version)
Quote: “Einstein is quoted as saying that the most incomprehensible thing about the world is that it is comprehensible. This mystery has both a philosophical level and a scientific level. The scientific level of the mystery is the question of how observers within the universe have ordered observations and thoughts about the universe. It seems obvious that our observations and thoughts would be very unlikely to have the order we experience if we were vacuum fluctuations, since presumably there are far more quantum states of disordered observations than of ordered ones. Therefore, I shall assume that our observational evidence of order implies that we are not vacuum fluctuations.”


This argument states that, in a fluctuating universe, most observers would have a “disordered” experience, not an ordered one. Therefore, if we were typical observers in such a universe, we should have random recollections, which we don’t.


However, this doesn’t answer the actual question we want to address, which could be restated as follows: given that our experience is (mostly) ordered, what is the probability that we are a Boltzmann Brain rather than an ordinary observer? The problem is that, in a randomly fluctuating universe, most observers with ordered experiences will be Boltzmann Brains, not ordinary observers. (To use an analogy in terms of the typing monkeys: it’s not enough to state that most monkeys are typing gibberish instead of Hamlet. The actual question we’d like to answer is: “Given that I am typing Hamlet, is it more likely that I am Shakespeare or a monkey?”.)

A more convincing argument against the possibility of being Boltzmann Brains is that this possibility is “self-undermining”. If we believe that we are Boltzmann Brains, we should also believe that our physics knowledge originated at random. In that case, such knowledge would have almost nothing to do with the actual past of the universe and the actual laws of physics. But the reasoning that led us to conclude that we should be Boltzmann Brains was based on those very laws. Therefore, the conclusion that we are Boltzmann Brains can’t be true and reasonably believed at the same time. This argument has been called “cognitive instability”:


#Carroll, Sean (2020): “Why Boltzmann Brains Are Bad”. Dasgupta Shamik et al. (eds), Current Controversies in Philosophy of Science, Routledge.

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315713151-3/boltzmann-brains-bad-sean-carroll  

https://arxiv.org/abs/1702.00850  (open-access version)

Quote: “The randomly-fluctuating universe scenario is therefore self-undermining, or as Albert has characterized similar situations in statistical mechanics, cognitively unstable [14, 3, 73]. If you reason yourself into believing that you live in such a universe, you have to conclude that you have no justification for accepting your own reasoning. You cannot simultaneously conclude that you live in a randomly-fluctuating universe and believe that you have good reason for concluding that. In (what most of us believe to be) the real world, in which there really was a low-entropy Big Bang in the relatively recent past, our memories and deductions about the past rely on a low-entropy Past Hypothesis for their justification. In a universe dominated by Boltzmann fluctuations, such an hypothesis is lacking, and we can’t trust anything we think we know.”



–Ok. So this hallucinatory trip might teach us something about our theories about the universe. But in the end it is just a really weird exercise in what you can do with physics. An exercise of what brains in bodies are able to think about. 


The main goal of recent research on Boltzmann Brains hasn’t been to find out whether or not we are Boltzmann Brains. Rather, the Boltzmann Brain “prediction” has been used as a tool to probe the internal consistency and predictive power of our best cosmological theories. Given the cognitive instability problem mentioned above, most cosmologists discount any model that predicts we should probably be Boltzmann Brains:

#Carroll, Sean (2020): “Why Boltzmann Brains Are Bad”. Dasgupta Shamik et al. (eds), Current Controversies in Philosophy of Science, Routledge.

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315713151-3/boltzmann-brains-bad-sean-carroll  

https://arxiv.org/abs/1702.00850  (open-access version)

Quote: “We therefore conclude that the right strategy is to reject cosmological models that would be dominated by Boltzmann Brains (or at least Boltzmann Observers among those who have data just like ours), not because we have empirical evidence against them, but because they are cognitively unstable and therefore self-undermining and unworthy of serious consideration. If we construct a model such as ΛCDM or a particular instantiation of the inflationary multiverse that seems to lead us into such a situation, our job as cosmologists is to modify it until this problem is solved, or search for a better theory. This is very useful guidance when it comes to the difficult task of building theories that describe the universe as a whole.”