StereoDiff: Stereo-Diffusion Synergy for Video Depth Estimation

arXiv 2024

¹University of Pennsylvania  ²HKUST(GZ)  ³University of Hong Kong  ⁴Apple
Teaser 1

StereoDiff excels in delivering remarkable global and local consistency for video depth estimation. In terms of global consistency, StereoDiff achieves highly accurate and stable depth maps on static backgrounds across consecutive windows, leveraging stereo matching to prevent the abrupt depth shifts often seen in DepthCrafter, where depth values on static backgrounds can vary significantly between adjacent windows. For local consistency, StereoDiff yields much smoother, flicker-free depth values across consecutive frames, especially in dynamic regions. In contrast, MonST3R suffers from frequent, pronounced flickering and jitters in these areas.

Abstract

Recent video depth estimation methods achieve strong performance by following the paradigm of image depth estimation, i.e., fine-tuning pre-trained video diffusion models on massive data. However, we argue that video depth estimation is not a naive extension of image depth estimation: the temporal consistency requirements for dynamic and static regions in videos are fundamentally different. Consistent video depth in static regions, typically backgrounds, can be more effectively achieved via stereo matching across all frames, which provides much stronger global 3D cues. In contrast, consistency in dynamic regions must still be learned from large-scale video depth data to ensure smooth transitions, since moving objects violate the triangulation constraints that stereo matching relies on.
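To make the triangulation point concrete, below is a small self-contained numerical illustration (ours, not taken from the paper): a static point is recovered exactly from two views, whereas a point that moves between the two capture times yields a triangulated depth that matches neither of its true positions.

```python
import numpy as np

def triangulate_midpoint(c0, d0, c1, d1):
    """Midpoint triangulation: the point closest to both viewing rays.
    c0, c1: camera centers; d0, d1: unit ray directions (world frame)."""
    # Solve for ray parameters (s, t) minimizing |(c0 + s*d0) - (c1 + t*d1)|^2.
    A = np.stack([d0, -d1], axis=1)                      # 3x2 system
    (s, t), *_ = np.linalg.lstsq(A, c1 - c0, rcond=None)
    return 0.5 * ((c0 + s * d0) + (c1 + t * d1))

def unit(v):
    return v / np.linalg.norm(v)

c0, c1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])  # 1 m baseline

# Static background point: the two rays meet at the true position (depth 5 m).
P = np.array([0.5, 0.0, 5.0])
print(triangulate_midpoint(c0, unit(P - c0), c1, unit(P - c1)))   # ~[0.5, 0.0, 5.0]

# Dynamic point: it moves 0.3 m sideways between the two capture times, so the
# two rays observe different 3D positions. Triangulating them as if static
# returns a point at roughly 7.1 m depth, matching neither true position.
P_t0, P_t1 = np.array([0.5, 0.0, 5.0]), np.array([0.8, 0.0, 5.0])
print(triangulate_midpoint(c0, unit(P_t0 - c0), c1, unit(P_t1 - c1)))
```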

Based on these insights, we introduce StereoDiff, a two-stage video depth estimator that synergizes stereo matching, which mainly handles static areas, with video depth diffusion, which maintains consistent depth transitions in dynamic areas. Through a frequency-domain analysis, we mathematically demonstrate how stereo matching and video depth diffusion offer complementary strengths, highlighting the effectiveness of their synergy in capturing the advantages of both.
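As a rough illustration of what such a frequency-domain view means in practice (a minimal sketch of our own, not the paper's analysis; the cutoff frequency and function name are assumptions), one can split the temporal spectrum of per-pixel depth errors into a low-frequency band, which captures slow drift across windows (global consistency), and a high-frequency band, which captures frame-to-frame flicker (local consistency):

```python
import numpy as np

def temporal_frequency_energy(depth_error, fps=30.0, split_hz=0.5):
    """Split the temporal spectrum of per-pixel depth errors into low-frequency
    energy (slow drift across windows, i.e. global consistency) and
    high-frequency energy (frame-to-frame flicker, i.e. local consistency).

    depth_error: (T, H, W) array, e.g. aligned prediction minus ground truth.
    split_hz:    illustrative cutoff separating "drift" from "flicker".
    """
    T = depth_error.shape[0]
    spectrum = np.fft.rfft(depth_error - depth_error.mean(axis=0), axis=0)
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)       # frequency (Hz) of each bin
    power = np.abs(spectrum) ** 2                 # (T//2 + 1, H, W) power spectrum
    low = power[freqs < split_hz].sum()           # drift: where stereo matching helps
    high = power[freqs >= split_hz].sum()         # flicker: where video diffusion helps
    return low, high
```

Intuitively, stereo matching suppresses the low-frequency error band while video depth diffusion suppresses the high-frequency band, which is why combining the two captures the strengths of both.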

Experimental results on zero-shot, real-world, dynamic video depth benchmarks, both indoor and outdoor, demonstrate StereoDiff's SoTA performance, showcasing its superior consistency and accuracy in video depth estimation.

Methodology

Pipeline of StereoDiff. ① In the first stage, all video frames are paired for stereo matching, focusing primarily on static backgrounds, to achieve the strong global consistency provided by global 3D constraints. ② In the second stage, StereoDiff applies video depth diffusion to the stereo matching-based video depth from the first stage, significantly improving local consistency without sacrificing the original global consistency and yielding video depth estimations with both strong global consistency and smooth local consistency.
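A schematic of the two-stage data flow is sketched below. The component names (`stereo_model.infer_video`, `diffusion_model.denoise`), the window/overlap sizes, and the assumption that the diffusion stage is initialized from the stage-1 depth are illustrative placeholders, not the released StereoDiff API.

```python
import numpy as np

def stereodiff_pipeline(frames, stereo_model, diffusion_model, window=100, overlap=20):
    """Schematic two-stage video depth estimation (placeholder components).

    Stage 1: stereo matching over paired frames -> globally consistent depth,
             accurate mainly on static backgrounds (global 3D constraints).
    Stage 2: video depth diffusion, run over sliding windows and initialized
             from the stage-1 depth, smooths dynamic regions without breaking
             the global consistency. Window/overlap sizes are illustrative.
    """
    # Stage 1: aggregate pairwise stereo depth into one globally aligned sequence.
    depth_stage1 = stereo_model.infer_video(frames)            # (T, H, W), placeholder

    # Stage 2: window-by-window refinement with the video depth diffusion model.
    T = len(frames)
    refined_sum = np.zeros_like(depth_stage1)
    weight = np.zeros((T, 1, 1))
    start = 0
    while start < T:
        end = min(start + window, T)
        refined = diffusion_model.denoise(                      # placeholder call
            frames[start:end], init_depth=depth_stage1[start:end]
        )
        refined_sum[start:end] += refined
        weight[start:end] += 1.0
        if end == T:
            break
        start = end - overlap                                   # consecutive windows overlap
    return refined_sum / weight                                 # blend overlapping windows
```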

Marigold training scheme

Experiments

Quantitative comparison of StereoDiff with SoTA methods on zero-shot, real-world, dynamic video depth benchmarks. The four sections from top to bottom represent: image depth estimators, stereo matching-based estimators, video depth diffusion models, and StereoDiff. To ensure comprehensive evaluation, we used two datasets: Bonn for indoor scenes and KITTI for outdoor scenes. We report the mean metric value of StereoDiff across 10 independent runs. Best results are bolded and the second best are underlined.
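For reference, benchmarks of this kind typically align each affine-invariant prediction to the ground truth with a least-squares scale and shift before computing AbsRel and δ1. A minimal sketch of that protocol (our own illustration, not the paper's evaluation code; per-sequence vs. per-frame alignment and depth- vs. disparity-space alignment depend on the benchmark) looks like:

```python
import numpy as np

def align_scale_shift(pred, gt, mask):
    """Least-squares scale/shift alignment of an affine-invariant prediction to GT
    (alignment in depth space here; some protocols align in disparity instead)."""
    p, g = pred[mask], gt[mask]
    A = np.stack([p, np.ones_like(p)], axis=1)          # design matrix [pred, 1]
    (scale, shift), *_ = np.linalg.lstsq(A, g, rcond=None)
    return scale * pred + shift

def abs_rel_and_delta1(pred, gt, mask):
    """AbsRel = mean(|pred - gt| / gt); delta1 = fraction of valid pixels with
    max(pred/gt, gt/pred) < 1.25, computed after scale/shift alignment."""
    pred = align_scale_shift(pred, gt, mask)
    p, g = pred[mask], gt[mask]
    abs_rel = float(np.mean(np.abs(p - g) / g))
    delta1 = float(np.mean(np.maximum(p / g, g / p) < 1.25))
    return abs_rel, delta1
```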

Comparison with other methods

Please refer to our paper linked above for more technical details :)