In machine learning, density models allow us to estimate the distribution of the training data, but they do not reason about the dynamics of the system we aim to control. In control theory, by contrast, Lyapunov functions provide a mechanism for making long-horizon guarantees about the stability of a system, but they are agnostic to any data or distribution. Lyapunov density models (LDMs) combine the data-aware aspect of density models with the dynamics-aware aspect of Lyapunov functions, in order to guarantee that a system controlled by a learning-based policy remains in-distribution over a long horizon.
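As a loose illustration of the idea (a toy sketch, not the actual LDM construction), one can picture a Lyapunov-style invariance check applied to a learned density: given a density model over states and a dynamics model, keep only the actions whose successor state stays above a log-density threshold, so the system never steps out of the region the data supports. Here `log_density`, `dynamics`, and the threshold are all hypothetical placeholders standing in for learned models:

```python
import numpy as np

def log_density(s):
    # Toy stand-in for a learned density model: a Gaussian
    # data distribution centered at the origin.
    return -0.5 * np.dot(s, s)

def dynamics(s, a):
    # Toy stand-in for a learned dynamics model: the action
    # simply shifts the state.
    return s + a

def in_distribution_actions(s, candidate_actions, threshold):
    """Keep only actions whose successor state remains in-distribution,
    i.e. whose log-density does not drop below `threshold`. This mimics
    how a Lyapunov-style condition constrains long-horizon behavior,
    here applied to a density rather than an energy function."""
    return [a for a in candidate_actions
            if log_density(dynamics(s, a)) >= threshold]

s = np.array([0.0, 0.0])
actions = [np.array([0.1, 0.0]), np.array([3.0, 0.0])]
safe = in_distribution_actions(s, actions, threshold=-1.0)
# The small action keeps the state near the training data;
# the large action would leave the high-density region and is filtered out.
```

Enforcing this check at every step gives a crude per-step version of the property LDMs provide with an actual guarantee: the controlled system never drifts into states where the learned models are unreliable.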