| Name: | Norman Müller |
|---|---|
| Position: | Ph.D. Candidate |
| E-Mail: | norman.mueller@tum.de |
| Phone: | +49-89-289-19595 |
| Room No: | 02.07.034 |
GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields |
Barbara Rössle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner |
SIGGRAPH Asia 2023 |
GANeRF proposes an adversarial formulation in which discriminator gradients provide feedback to a 3D-consistent neural radiance field representation, introducing additional constraints that lead to more realistic novel view synthesis (see the illustrative sketch below).
[video][bibtex][project page] |
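As a rough, hypothetical sketch of the idea above (assuming a PyTorch-like setup; `PatchDiscriminator`, the patch tensors, and the loss weights are invented for illustration and are not the GANeRF implementation), the snippet shows how an adversarial patch loss can be added on top of a NeRF's photometric loss so that discriminator gradients reach the rendered output:

```python
# Illustrative sketch (not the authors' code): adversarial feedback on rendered patches
# supplementing a per-pixel reconstruction loss. Names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Small CNN that scores RGB patches as real (photo) or fake (rendered)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4),  # patch-level logits
        )

    def forward(self, x):
        return self.net(x)

disc = PatchDiscriminator()
rendered_patch = torch.rand(8, 3, 32, 32, requires_grad=True)  # stand-in for NeRF renderings
real_patch = torch.rand(8, 3, 32, 32)                          # ground-truth image patches

# Discriminator update: distinguish real photos from renderings.
d_loss = F.softplus(disc(rendered_patch.detach())).mean() + F.softplus(-disc(real_patch)).mean()

# Radiance-field update: photometric loss plus adversarial feedback, whose gradients
# flow back through the rendered patch (and hence, in a full system, into the field).
recon_loss = F.mse_loss(rendered_patch, real_patch)
adv_loss = F.softplus(-disc(rendered_patch)).mean()
g_loss = recon_loss + 0.01 * adv_loss
```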
Panoptic Lifting for 3D Scene Understanding with Neural Fields |
Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Norman Müller, Matthias Nießner, Angela Dai, Peter Kontschieder
CVPR 2023 |
Given only RGB images of an in-the-wild scene as input, Panoptic Lifting optimizes a panoptic radiance field that can be queried for color, depth, semantics, and instances at any point in space (see the illustrative sketch below).
[bibtex][project page] |
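A minimal sketch of what such a queryable field could look like as an interface, assuming a small PyTorch MLP; the class and head names are hypothetical and this is not the released Panoptic Lifting model (density is returned instead of depth, since depth follows from volumetric rendering of density):

```python
# Hypothetical panoptic neural field: any 3D point can be queried for color,
# density, semantic logits, and instance logits.
import torch
import torch.nn as nn

class PanopticField(nn.Module):
    def __init__(self, num_classes=21, num_instances=32, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.rgb = nn.Linear(hidden, 3)               # color
        self.sigma = nn.Linear(hidden, 1)             # volume density (depth via rendering)
        self.sem = nn.Linear(hidden, num_classes)     # semantic class logits
        self.inst = nn.Linear(hidden, num_instances)  # instance logits (surrogate IDs)

    def forward(self, xyz):
        h = self.trunk(xyz)
        return {
            "rgb": torch.sigmoid(self.rgb(h)),
            "sigma": torch.relu(self.sigma(h)),
            "semantics": self.sem(h),
            "instances": self.inst(h),
        }

field = PanopticField()
points = torch.rand(1024, 3)   # arbitrary 3D query points
out = field(points)            # per-point color, density, semantics, instances
```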
DiffRF: Rendering-Guided 3D Radiance Field Diffusion |
Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner
CVPR 2023 |
DiffRF is a denoising diffusion probabilistic model that operates directly on 3D radiance fields and is trained with an additional volumetric rendering loss. This yields strong radiance priors with high rendering quality and accurate geometry, and naturally enables tasks such as 3D masked completion and image-to-volume synthesis (a rough sketch of the training signal follows below).
[video][bibtex][project page] |
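A hedged sketch of that training signal, not the DiffRF implementation: a DDPM-style denoising loss on an explicit radiance-field grid plus an extra loss in rendering space. `toy_render` is a crude stand-in for volumetric rendering, and all names, weights, and shapes are assumptions:

```python
# Toy diffusion step on radiance-field grids (RGB + density) with an added
# rendering-space loss on the reconstructed grid.
import torch
import torch.nn as nn
import torch.nn.functional as F

denoiser = nn.Sequential(          # tiny 3D CNN standing in for the diffusion backbone
    nn.Conv3d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv3d(32, 4, 3, padding=1),
)

def toy_render(grid):
    # grid: (B, 4, D, H, W); crude integration along depth as a rendering surrogate.
    rgb, sigma = grid[:, :3], torch.relu(grid[:, 3:])
    weights = torch.softmax(sigma, dim=2)
    return (weights * rgb).sum(dim=2)          # (B, 3, H, W) pseudo-image

x0 = torch.rand(2, 4, 16, 16, 16)              # clean radiance-field grids
gt_images = toy_render(x0).detach()            # stand-in for posed training photos

# Standard DDPM forward process at a random noise level, then predict the noise.
alpha_bar = torch.rand(2).clamp(0.1, 0.99).view(2, 1, 1, 1, 1)
noise = torch.randn_like(x0)
xt = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
pred_noise = denoiser(xt)

# Recover an estimate of the clean grid and supervise it in rendering space as well.
x0_hat = (xt - (1 - alpha_bar).sqrt() * pred_noise) / alpha_bar.sqrt()
loss = F.mse_loss(pred_noise, noise) + 0.1 * F.mse_loss(toy_render(x0_hat), gt_images)
```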
AutoRF: Learning 3D Object Radiance Fields from Single View Observations |
Norman Müller, Andrea Simonelli, Lorenzo Porzi, Samuel Rota Bulò, Matthias Nießner, Peter Kontschieder |
CVPR 2022 |
From just a single view, we learn neural 3D object representations that enable novel view synthesis from arbitrary viewpoints. This setting is in stark contrast to the majority of existing works that leverage multiple views of the same object, employ explicit priors during training, or require pixel-perfect annotations. Our method decouples object geometry, appearance, and pose (see the illustrative sketch below), enabling generalization to unseen objects, even across different datasets of challenging real-world street scenes such as nuScenes, KITTI, and Mapillary Metropolis.
[video][bibtex][project page] |
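Purely as illustration of that decoupling (not the AutoRF architecture; the encoder, field, and code names are invented), a single image could be encoded into separate shape and appearance codes, with pose handled by transforming query points into the object frame before querying the conditioned field:

```python
# Hypothetical decoupling of geometry (shape code), appearance (appearance code),
# and pose (rigid transform of the query points) for a single-view object field.
import torch
import torch.nn as nn

class SingleViewEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.to_shape = nn.Linear(16, dim)        # geometry code
        self.to_appearance = nn.Linear(16, dim)   # appearance code

    def forward(self, img):
        f = self.backbone(img)
        return self.to_shape(f), self.to_appearance(f)

class ConditionedObjectField(nn.Module):
    def __init__(self, dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3 + 2 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 4))  # RGB + density

    def forward(self, xyz_obj, shape_code, appearance_code):
        codes = torch.cat([shape_code, appearance_code], dim=-1)
        codes = codes.unsqueeze(1).expand(-1, xyz_obj.shape[1], -1)
        return self.net(torch.cat([xyz_obj, codes], dim=-1))

enc, field = SingleViewEncoder(), ConditionedObjectField()
img = torch.rand(1, 3, 64, 64)                     # single observation of the object
shape_code, appearance_code = enc(img)
R, t = torch.eye(3), torch.zeros(3)                # object pose (world -> object)
xyz_world = torch.rand(1, 256, 3)
xyz_obj = (xyz_world - t) @ R.T                    # pose handled outside the field
out = field(xyz_obj, shape_code, appearance_code)  # (1, 256, 4): color + density
```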
Seeing Behind Objects for 3D Multi-Object Tracking in RGB-D Sequences |
Norman Müller, Yu-Shiang Wong, Niloy J. Mitra, Angela Dai, Matthias Nießner |
CVPR 2021 |
We introduce a novel method to jointly predict complete geometry and dense correspondences of rigidly moving objects for 3D multi-object tracking in RGB-D sequences. By hallucinating unseen regions of objects, we obtain additional correspondences between observations of the same instance, providing robust tracking even under strong changes in appearance (a rough sketch of the idea follows below).
[video][bibtex][project page] |
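A rough sketch of the idea, with assumed names rather than the paper's code: one network jointly predicts completed geometry and a dense per-voxel correspondence embedding, and matching embeddings across frames yields correspondences even in hallucinated, never-observed regions:

```python
# Joint completion + correspondence sketch: predict completed occupancy and a
# per-voxel embedding, then match embeddings between two frames of the same object.
import torch
import torch.nn as nn

class CompletionAndCorrespondence(nn.Module):
    def __init__(self, feat_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                                      nn.Conv3d(16, 16, 3, padding=1), nn.ReLU())
        self.occupancy = nn.Conv3d(16, 1, 1)       # completed geometry (occupancy)
        self.embed = nn.Conv3d(16, feat_dim, 1)    # dense correspondence features

    def forward(self, partial_grid):
        h = self.backbone(partial_grid)
        return torch.sigmoid(self.occupancy(h)), self.embed(h)

net = CompletionAndCorrespondence()
frame_t = torch.rand(1, 1, 16, 16, 16)    # partial scan of an object at time t
frame_t1 = torch.rand(1, 1, 16, 16, 16)   # the same instance in the next frame
occ_t, feat_t = net(frame_t)
occ_t1, feat_t1 = net(frame_t1)

# Correspondences: for each (observed or hallucinated) voxel in frame t, find the
# most similar voxel embedding in frame t+1 via cosine similarity.
f_t = nn.functional.normalize(feat_t.flatten(2), dim=1)    # (1, C, N)
f_t1 = nn.functional.normalize(feat_t1.flatten(2), dim=1)
similarity = torch.einsum("bcn,bcm->bnm", f_t, f_t1)
matches = similarity.argmax(dim=-1)                         # best match per voxel
```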