RC-MVSNet: Unsupervised Multi-View Stereo with Neural Rendering
Oct 23, 2022
Di Chang
Aljaz Bozic
Tong Zhang
Qingsong Yan
Ying-Cong Chen
Sabine Süsstrunk
Matthias Nießner
Abstract
Finding accurate correspondences among different views is the Achilles heel of unsupervised Multi-View Stereo (MVS). Existing methods are built upon the assumption that corresponding pixels share similar photometric features. However, real-world multi-view images often contain non-Lambertian surfaces and occlusions, which violate this assumption. In this work, we propose a novel approach with neural rendering (RC-MVSNet) to resolve such ambiguities in cross-view correspondence. Specifically, we impose a depth rendering consistency loss that constrains geometry features close to the object surface, alleviating occlusions. Concurrently, we introduce a reference view synthesis loss that provides consistent supervision even on non-Lambertian surfaces. Extensive experiments on the DTU and Tanks&Temples benchmarks demonstrate that RC-MVSNet achieves state-of-the-art performance among unsupervised MVS frameworks and is competitive with many supervised methods.
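The abstract combines two supervision signals: a photometric term comparing the reference view against source views warped into its frame, and a consistency term between the MVS depth and a rendered depth. A minimal NumPy sketch of how such terms might be combined is shown below; the function names, the occlusion mask, and the weights `w_pc` and `w_dc` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def photometric_loss(ref, warped, mask):
    # Masked L1 difference between the reference image and a source
    # image warped into the reference frame; the mask (hypothetical
    # here) would downweight occluded pixels.
    diff = np.abs(ref - warped)
    return (diff * mask).sum() / np.maximum(mask.sum(), 1e-8)

def depth_consistency_loss(d_mvs, d_rendered):
    # L1 agreement between the MVS-predicted depth and a depth map
    # obtained by neural rendering (stand-in for the paper's
    # depth rendering consistency term).
    return np.abs(d_mvs - d_rendered).mean()

def total_loss(ref, warped, mask, d_mvs, d_rendered, w_pc=1.0, w_dc=0.1):
    # Illustrative weighted sum; the weights are assumed values.
    return (w_pc * photometric_loss(ref, warped, mask)
            + w_dc * depth_consistency_loss(d_mvs, d_rendered))
```

For a perfectly Lambertian, unoccluded scene with consistent depths, both terms vanish; the interesting cases are exactly the ones the paper targets, where the photometric term alone is unreliable.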
Publication
Proceedings of the European Conference on Computer Vision (ECCV)