(If the equations in this page are not rendering correctly, just refresh your browser a few times)
The Mixing Length Theory (MLT; Böhm-Vitense 1958; see Joyce & Tayar 2023 for a recent review) is a simplified model used to describe convection in stars. It is based on the idea that convective cells transport energy by moving hot material upwards and cool material downwards, similar to how fluid rises in a boiling pot of water (before the phase transition occurs). In MLT, convection is characterized by the concept of a mixing length, denoted by l, which represents the characteristic length over which a convective cell travels before exchanging its thermal content with the surrounding medium. The mixing length does not necessarily correspond to a true physical distance; rather, it is a free parameter of the theory.
The αMLT parameter is a dimensionless factor introduced in MLT to determine the mixing length. It is typically defined as the ratio of the mixing length l to the local pressure scale height HP:

$$\alpha_{\rm MLT} = \frac{l}{H_P}$$
where the pressure scale height HP is the length scale over which the pressure changes significantly (by a factor of e) in the stellar interior, i.e. $H_P = \left|\frac{dr}{d\ln P}\right| = \frac{P}{\rho g}$ in hydrostatic equilibrium.
In this way, the αMLT parameter sets how efficient convective energy transport is within a star: the higher the value of αMLT, the longer the characteristic distance a fluid parcel is assumed to travel, and consequently the more flux it can transport.
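As a quick numerical illustration (a minimal sketch with assumed, rough solar-photosphere values in cgs units; these numbers are not from any MESA model), the mixing length follows directly from HP and a chosen αMLT:

```python
# Sketch: mixing length l = alpha_MLT * H_P, with H_P = P / (rho * g)
# from hydrostatic equilibrium. All values are illustrative cgs numbers.

G = 6.6743e-8       # gravitational constant [cm^3 g^-1 s^-2]
Msun = 1.989e33     # solar mass [g]
Rsun = 6.957e10     # solar radius [cm]

def pressure_scale_height(P, rho, M, r):
    """H_P = P / (rho * g), with local gravity g = G M / r^2."""
    g = G * M / r**2
    return P / (rho * g)

alpha_MLT = 2.0                      # a typical calibrated value
P_photo, rho_photo = 1.0e5, 3.0e-7   # rough solar-photosphere P [dyn cm^-2], rho [g cm^-3]

H_P = pressure_scale_height(P_photo, rho_photo, Msun, Rsun)
l = alpha_MLT * H_P
print(f"H_P ~ {H_P:.2e} cm, mixing length l ~ {l:.2e} cm")
```

With these rough numbers, HP comes out of order 10⁷ cm (around a hundred kilometers), in line with the scale of solar surface convection.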
There is no purely theoretical basis for what αMLT should be; rather, it must be calibrated. In the literature, this has been done either by comparing detailed models with observed properties of stars, such as their luminosity, effective temperature, and surface abundances, or by calibrating MLT against 3D simulations of convection in stellar envelope models. For most stars and convective regimes, these procedures yield best-match values of αMLT ~ 1.5-4.
The global thermal, or Kelvin-Helmholtz, timescale tKH is the timescale over which a star contracts gravitationally and releases its internal energy content through radiation, at the rate of its surface luminosity L:

$$t_{\rm KH} = \frac{G M^2}{R L} \approx 3.1 \times 10^7 \,{\rm yr}\, \left(\frac{M}{M_\odot}\right)^2 \left(\frac{R}{R_\odot}\right)^{-1} \left(\frac{L}{L_\odot}\right)^{-1}$$
where G = 6.6743 × 10-8 cm3 g-1 s-2 is the gravitational constant, M is the mass of the star, R the radius and L the luminosity, expressed in solar units (M☉, R☉ and L☉) after the second equality.
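A quick back-of-the-envelope check (a sketch in cgs units, not MESA output) recovers the familiar value for the Sun:

```python
# Global Kelvin-Helmholtz timescale t_KH = G M^2 / (R L), evaluated
# for the Sun with standard cgs constants.

G = 6.6743e-8        # gravitational constant [cm^3 g^-1 s^-2]
Msun = 1.989e33      # g
Rsun = 6.957e10      # cm
Lsun = 3.828e33      # erg s^-1
SEC_PER_YR = 3.156e7

def t_KH(M, R, L):
    """Global Kelvin-Helmholtz timescale in seconds."""
    return G * M**2 / (R * L)

t_sun = t_KH(Msun, Rsun, Lsun) / SEC_PER_YR
print(f"t_KH(Sun) ~ {t_sun:.2e} yr")   # of order 3e7 yr
```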
The Kelvin-Helmholtz timescale also describes how fast changes in the thermal structure of a star can occur; to first approximation, it is therefore the timescale on which a star reacts when its thermal equilibrium is perturbed. Such a perturbation can be the consequence of mass loss: directly following an episode of mass loss, the donor star is out of both hydrostatic and thermal equilibrium. The star initially restores its hydrostatic equilibrium very rapidly (on a dynamical timescale). After that, the star attempts to recover the thermal-equilibrium radius appropriate for its new mass on the thermal timescale.
The local thermal timescale tth at mass coordinate m is the time it would take the star to radiate away all the thermal energy content of the layers above m at the rate of its surface luminosity L:

$$t_{\rm th}(m) = \frac{1}{L} \int_m^M c_P(m')\, T(m')\, dm'$$
where for each mass coordinate m, cP(m) represents the local specific heat at constant pressure and T(m) is the local temperature. Along the star's profile, the local thermal timescale is expected to decrease from the core to the surface; this simply reflects the fact that the time needed to radiate away the thermal energy stored from a certain depth upwards grows with that depth. Over a star's evolutionary history, one expects the local thermal timescale profile to shift to lower values as the star expands and the envelope properties change.
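To make the definition concrete, here is a toy calculation (an entirely made-up envelope profile, not a real stellar model) that integrates cP T from each mass coordinate up to the surface:

```python
# Toy local thermal timescale: t_th(m) = (1/L) * integral_m^M c_P T dm',
# computed on an assumed, illustrative envelope profile in cgs units.
import numpy as np

Msun = 1.989e33          # g
Lsun = 3.828e33          # erg s^-1
SEC_PER_YR = 3.156e7

m = np.linspace(0.5, 1.0, 200) * Msun     # mass coordinate grid (outer half of the star)
T = 1.0e7 * (1.0 - m / Msun) + 1.0e4      # toy temperature profile [K]
cP = np.full_like(m, 2.5e8)               # toy specific heat [erg g^-1 K^-1]

# Reverse cumulative trapezoidal integral of c_P T dm from each m to the surface:
contrib = 0.5 * (cP[:-1] * T[:-1] + cP[1:] * T[1:]) * np.diff(m)
E_above = np.concatenate([np.cumsum(contrib[::-1])[::-1], [0.0]])
t_th = E_above / Lsun / SEC_PER_YR        # in years

print(f"t_th ~ {t_th[0]:.2e} yr at the base of the grid, "
      f"~ {t_th[-2]:.2e} yr just below the surface")
```

As expected from the argument above, t_th decreases monotonically from the base of the envelope to the surface, since each step outward removes a positive contribution from the integral.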
Crucially, the local thermal timescale tth and the global Kelvin-Helmholtz timescale tKH are two very different quantities. By assuming that changes in the star's thermal structure occur on the global timescale tKH, you are implying that the whole structure reacts in the same way, that is, a fully adiabatic response. By considering the local timescale tth instead, you are in principle capturing how the properties of the envelope (e.g. the efficiency of convection, the degree of ionization, the specific energy content) determine the thermal response in each layer of the star.
Why is this relevant? In general, there is always a region just below the stellar photosphere where the adiabatic approximation fails: there, the energy density in the gas falls below that in the radiation, and radiative relaxation becomes faster even than the local dynamical timescale. Therefore, even on timescales much shorter than the global thermal timescale, stars do not necessarily react fully adiabatically to perturbations of their equilibrium, since the surface layers respond more rapidly than the star as a whole.
To both the global Kelvin-Helmholtz timescale and the local thermal timescale we can associate a corresponding thermal mass-loss rate. The rate ṀKH at which mass loss can occur within the global thermal timescale tKH of a star of mass M is given by

$$\dot{M}_{\rm KH} = \frac{M}{t_{\rm KH}}$$
On the other hand, the minimum mass-loss rate Ṁth required to strip away the outer layers of a star down to a mass coordinate m faster than these layers can thermally readjust can be defined as

$$\dot{M}_{\rm th}(m) = \frac{M - m}{t_{\rm th}(m)}$$
These quantities naturally represent a limit on the rate Ṁ, during a sudden mass loss episode, below which the star is able to thermally adjust globally or locally, respectively. In the literature, this limit has often been translated into a criterion for the (in)stability of a mass loss episode, in particular during mass transfer in binary systems, with important consequences for any prediction of post-mass-transfer properties!
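The two rates defined above can be sketched as follows (a minimal illustration with solar values in cgs units; the numbers are assumptions for this example, not MESA results):

```python
# Thermal mass-loss rates: Mdot_KH = M / t_KH (global) and
# Mdot_th(m) = (M - m) / t_th(m) (local), in cgs units.

G = 6.6743e-8        # gravitational constant [cm^3 g^-1 s^-2]
Msun = 1.989e33      # g
Rsun = 6.957e10      # cm
Lsun = 3.828e33      # erg s^-1
SEC_PER_YR = 3.156e7

def mdot_KH(M, R, L):
    """Global thermal mass-loss rate M / t_KH, in g s^-1."""
    t_KH = G * M**2 / (R * L)
    return M / t_KH

def mdot_th(M, m, t_th):
    """Minimum rate to strip the layers above m faster than t_th, in g s^-1."""
    return (M - m) / t_th

rate = mdot_KH(Msun, Rsun, Lsun) * SEC_PER_YR / Msun   # in Msun/yr
print(f"Mdot_KH(Sun) ~ {rate:.1e} Msun/yr")            # of order 3e-8 Msun/yr
```

A donor losing mass much faster than these rates cannot thermally readjust during the episode, which is why they appear in stability criteria for binary mass transfer.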
Now that we've discussed the physics, please navigate to Minilab 2 Task 1: The Kelvin-Helmholtz timescale for MESA tasks