Yuta Oshima, Masahiro Suzuki, Yutaka Matsuo, Hiroki Furuta
NeurIPS 2025
The remarkable progress in text-to-video diffusion models enables the generation of photorealistic videos, although the generated videos often contain unnatural movement or deformation, reverse playback, and motionless scenes. Recently, the alignment problem has attracted considerable attention: steering the output of diffusion models based on some measure of the content's quality. Because there is large room for improving perceptual quality along the frame direction, we need to address which metrics to optimize and how to optimize them in video generation. In this paper, we propose diffusion latent beam search with a lookahead estimator, which selects a better diffusion latent to maximize a given alignment reward at inference time. We then point out that improving perceptual video quality with respect to prompt alignment requires reward calibration by weighting existing metrics, because many previous metrics for quantifying video naturalness do not always correlate with evaluations by humans or vision-language models. We demonstrate that our method improves perceptual quality as evaluated by the calibrated reward, VLMs, and human assessment, without updating model parameters, and produces better generations than greedy search and best-of-N sampling at a much lower computational cost. The experiments highlight that our method benefits many capable generative models and provide a practical guideline: inference-time compute should be allocated to enabling the lookahead estimator and increasing the search budget, rather than to expanding the number of denoising steps.
Diffusion latent beam search (DLBS) searches for a better diffusion path over the reverse process: at each denoising step it samples K candidate latents per beam and keeps B beams for the next step, which helps explore latent paths robustly (see the sketch below).
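The following is a minimal PyTorch/diffusers-style sketch of this search loop, not the released implementation. `unet`, `scheduler` (with its timesteps already set), `cond`, and `reward_fn` are assumed stand-ins for the video denoiser, a DDIM-style scheduler, the prompt conditioning, and a per-latent reward that returns a float (e.g. the lookahead estimator described next).

```python
import torch

@torch.no_grad()
def diffusion_latent_beam_search(unet, scheduler, cond, reward_fn, shape,
                                 K=4, B=2, device="cuda"):
    """Keep B beams of latents; at every denoising step, sample K stochastic
    children per beam and retain the B highest-reward latents."""
    beams = [torch.randn(shape, device=device) for _ in range(B)]
    for t in scheduler.timesteps:
        candidates = []
        for latent in beams:
            eps = unet(latent, t, encoder_hidden_states=cond).sample
            for _ in range(K):
                # eta > 0 injects fresh noise, so each of the K children differs.
                child = scheduler.step(eps, t, latent, eta=1.0).prev_sample
                candidates.append(child)
        # Score all K*B candidates and keep the top-B as beams for the next step.
        scores = torch.tensor([reward_fn(c, t) for c in candidates])
        topk = scores.topk(min(B, len(candidates))).indices
        beams = [candidates[i] for i in topk]
    return beams[0]
```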
The lookahead (LA) estimator notably reduces the noise in latent reward evaluation by rolling out the remaining timesteps from the current latent with T' deterministic DDIM steps (a sketch follows below).
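Below is a hedged sketch of how such a lookahead rollout could be implemented under the same assumptions as above; the timestep selection and decoding are simplifications, and `vae` and `video_reward` are hypothetical stand-ins for the latent decoder and the reward metric.

```python
import torch

@torch.no_grad()
def lookahead_reward(latent, t, unet, scheduler, cond, vae, video_reward, T_prime=6):
    """Estimate the reward of a mid-trajectory latent by rolling it out to an
    approximate clean sample with T' deterministic DDIM steps, then decoding
    and scoring the result."""
    alphas = scheduler.alphas_cumprod.to(latent.device)
    # T' evenly spaced timesteps from the remaining part of the schedule.
    remaining = scheduler.timesteps[scheduler.timesteps < t]
    if len(remaining) == 0:
        # Already at the end of the schedule: decode and score directly.
        return video_reward(vae.decode(latent).sample)
    idx = torch.linspace(0, len(remaining) - 1, steps=min(T_prime, len(remaining))).long()
    steps = remaining[idx]
    x = latent
    for i, s in enumerate(steps):
        a_t = alphas[int(s)]
        a_next = alphas[int(steps[i + 1])] if i + 1 < len(steps) else torch.ones((), device=x.device)
        eps = unet(x, s, encoder_hidden_states=cond).sample
        # Deterministic DDIM update (eta = 0): predict x0, then jump to the next kept step.
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    frames = vae.decode(x).sample      # approximate clean video
    return video_reward(frames)        # reward evaluated on the lookahead estimate
```

In the beam-search sketch above, `reward_fn` would simply wrap this function, so candidates are ranked by the reward of their lookahead estimates rather than by the noisy intermediate latents.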
DLBS achieves much better computational efficiency than best-of-N (BoN), attaining higher performance gains within the same execution time.
The LA estimator (DLBS-LA) remarkably boosts efficiency with only marginal overhead on top of DLBS.
Comparison of text-to-video results between DLBS-LA, base models, and other sampling methods on SoTA models (Latte, CogVideoX, and Wan 2.1). DLBS-LA produces more dynamic, natural, and prompt-aligned videos than all baselines.
Prompt: Under a rainbow, a zebra kicks up a spray of water as it crosses a fast-flowing river.
Latte
+ GS (KB=32)
+ DLBS (KB=32)
+ DLBS-LA (KB=8, T'=6)
Prompt: dog puts paws together
Latte
+ BoN (KB=32)
+ DLBS (KB=32)
+ DLBS-LA (KB=8, T'=6)
Prompt: Two dogs chase each other, suddenly skidding around a sharp corner.
CogVideoX-5B
+ DLBS-LA (KB=8, T'=6)
Prompt: A person on a hoverboard colliding with a wall, the board stopping abruptly.
Wan 2.1-14B
+ DLBS-LA (KB=8, T'=6)
For more qualitative results for inference-time text-to-video alignment, please take a look at the Gallery.
We perform pairwise comparisons between DLBS-LA (KB=8, T'=6, NFE=2500) and BoN (KB=64, NFE=3200).
The results confirm that, regardless of the model or prompt, content generated by DLBS-LA consistently outperforms the baseline despite requiring fewer NFEs.
We then point out that improving perceptual video quality, while maintaining alignment to prompts, requires reward calibration of existing metrics. Many previous metrics for quantifying video naturalness do not always correlate with evaluations by capable vision-language models or human raters. Optimal reward design for measuring perceptual quality depends heavily on the degree of dynamics described in the evaluation prompts. We therefore design a weighted linear combination of multiple metrics, calibrated to perceptual quality, which improves the correlation with VLM/human preference.
We select the best coefficients among brute-force candidates based on the correlation with Gemini, separately for each set of prompts with a different dynamics grade. Prompts with a high dynamics grade (DEVIL-high) place greater weight on the dynamic degree, whereas prompts that describe slight motion (DEVIL-medium and DEVIL-static) place a smaller weight on it. A sketch of this calibration procedure is shown below.
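As an illustration of this calibration, the sketch below brute-forces weight combinations for a linear reward over per-video metric scores and keeps the weights whose combined score correlates best with VLM ratings. The weight grid, the metric set, and the use of Pearson correlation are assumptions for illustration, not the paper's exact protocol.

```python
import itertools
import numpy as np

def calibrate_reward_weights(metric_scores, vlm_scores,
                             grid=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Brute-force search for weights w of a linear reward r = metrics @ w that
    maximizes correlation with VLM ratings on a set of calibration videos.

    metric_scores: (num_videos, num_metrics) array of per-video metric values,
        e.g. dynamic degree, motion smoothness, aesthetic quality, text-video consistency.
    vlm_scores: (num_videos,) array of VLM (e.g. Gemini) ratings of the same videos.
    """
    num_metrics = metric_scores.shape[1]
    best_w, best_corr = None, -np.inf
    for w in itertools.product(grid, repeat=num_metrics):
        w = np.asarray(w)
        if w.sum() == 0:
            continue
        w = w / w.sum()                                   # normalize weights to sum to 1
        combined = metric_scores @ w                      # weighted linear combination
        corr = np.corrcoef(combined, vlm_scores)[0, 1]    # Pearson correlation with VLM
        if corr > best_corr:
            best_w, best_corr = w, corr
    return best_w, best_corr
```

A separate set of weights would be selected for each dynamics grade (DEVIL-high/medium/static), matching the observation above that high-dynamics prompts favor a larger dynamic-degree weight.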
For each prompt, drawn from DEVIL-high, DEVIL-medium, DEVIL-static, and MSRVTT-test, we select the video with the highest reward out of 64 randomly generated candidates.
Videos chosen using VLM-calibrated rewards achieve more balanced quality than those chosen by any single metric.
DEVIL-high
Prompt: A storm sweeps an elephant into a raging river, carrying it away swiftly.
Motion Smoothness (lack of motion)
Dynamic Degree (prompt misalignment)
Aesthetic Quality (lack of motion)
Gemini Calibrated Reward
DEVIL-medium
Prompt: Macao beach with stone mountains aerial view from drone. travel destination. summer vacation. dominican republic
Subject Consistency (lack of motion)
Imaging Quality (prompt misalignment)
Text-Video Consistency
GPT Calibrated Reward
DEVIL-static
Prompt: black car is under the blue sign.
Motion Smoothness (lack of motion, prompt misalignment)
Dynamic Degree (lack of consistency, prompt misalignment)
Aesthetic Quality (prompt misalignment)
Gemini Calibrated Reward
MSRVTT-test
Prompt: a yellow-haired girl is explaining about a game
Subject Consistency / Motion Smoothness (lack of motion)
Dynamic Degree (lack of consistency)
Imaging Quality (prompt misalignment)
GPT Calibrated Reward