RC-MVSNet: Unsupervised Multi-View Stereo with Neural Rendering
Di Chang1     Aljaž Božič1     Tong Zhang2     Qingsong Yan3     Yingcong Chen3     Sabine Süsstrunk2     Matthias Nießner1    
    1Technical University of Munich     2École Polytechnique Fédérale de Lausanne     3Hong Kong University of Science and Technology
ECCV 2022
Abstract

Finding accurate correspondences among different views is the Achilles' heel of unsupervised Multi-View Stereo (MVS). Existing methods are built on the assumption that corresponding pixels share similar photometric features. However, multi-view images in real-world scenarios contain non-Lambertian surfaces and occlusions. In this work, we propose a novel approach with neural rendering (RC-MVSNet) to resolve such correspondence ambiguities among views. Specifically, we impose a depth rendering consistency loss that constrains the geometry features to lie close to the object surface, alleviating occlusions. Concurrently, we introduce a reference view synthesis loss to generate consistent supervision, even for non-Lambertian surfaces. Extensive experiments on the DTU and Tanks & Temples benchmarks demonstrate that our RC-MVSNet approach achieves state-of-the-art performance among unsupervised MVS frameworks and competitive performance with many supervised methods.
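To illustrate how the two supervision signals interact, below is a minimal, simplified sketch in PyTorch of a NeRF-style volume renderer producing both a rendered color and an expected depth per ray, from which a reference view synthesis loss and a depth rendering consistency loss can be computed. The function names, tensor shapes, and loss choices here are illustrative assumptions, not the exact formulation or weighting used in the paper.

```python
import torch
import torch.nn.functional as F

def volume_render(rgb_sigma, z_vals):
    """NeRF-style volume rendering along each ray (simplified sketch).

    rgb_sigma: (N_rays, N_samples, 4) predicted color + density per sample
    z_vals:    (N_rays, N_samples) sample depths along each ray
    Returns rendered color (N_rays, 3) and expected depth (N_rays,).
    """
    rgb, sigma = rgb_sigma[..., :3], rgb_sigma[..., 3]
    # Distances between consecutive samples; last interval is effectively infinite.
    deltas = torch.cat([z_vals[:, 1:] - z_vals[:, :-1],
                        1e10 * torch.ones_like(z_vals[:, :1])], dim=-1)
    alpha = 1.0 - torch.exp(-F.relu(sigma) * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans
    color = (weights[..., None] * rgb).sum(dim=1)   # rendered reference pixel colors
    depth = (weights * z_vals).sum(dim=1)           # expected depth along each ray
    return color, depth

def rc_losses(rgb_sigma, z_vals, ref_rgb_gt, mvs_depth):
    """Hypothetical combination of the two unsupervised losses.

    ref_rgb_gt: (N_rays, 3) ground-truth colors of the reference view pixels
    mvs_depth:  (N_rays,)   depth predicted by the MVS branch for the same pixels
    """
    color, depth = volume_render(rgb_sigma, z_vals)
    loss_view_synthesis = F.l1_loss(color, ref_rgb_gt)     # photometric supervision
    loss_depth_consistency = F.l1_loss(depth, mvs_depth)   # geometry supervision
    return loss_view_synthesis, loss_depth_consistency
```

The key idea captured by this sketch is that rendering the reference view itself sidesteps cross-view photometric inconsistency on non-Lambertian surfaces, while tying the rendered expected depth to the MVS depth concentrates the rendering weights near the surface and mitigates occlusion ambiguity.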
Extras

Paper

Bibtex