Name | Yinyu Nie
---|---
Position | Postdoc
E-Mail | yinyu.nie@tum.de
Phone | +49(89)289-17888
Room No. | 02.07.039
Learning 3D Scene Priors with 2D Supervision
Yinyu Nie, Angela Dai, Xiaoguang Han, Matthias Nießner
CVPR 2023
We learn 3D scene priors with 2D supervision. We model a latent hypersphere whose surface represents a manifold of 3D scenes, characterizing the semantic and geometric distribution of objects within them. This supports many downstream applications, including scene synthesis, interpolation, and single-view reconstruction; a toy sketch of the interpolation idea follows below.
[video] [bibtex] [project page]
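The latent codes in this model live on the surface of a hypersphere, which is what makes scene interpolation natural: intermediate codes obtained by spherical linear interpolation (slerp) remain on the manifold. The snippet below is a minimal illustrative sketch of that idea only, not the paper's code; the 128-dimensional latent size and all variable names are assumptions.

```python
# Toy sketch (assumption, not the paper's implementation): spherical linear
# interpolation between two unit-norm latent scene codes on a hypersphere.
import numpy as np

def slerp(z0, z1, t):
    """Interpolate on the unit hypersphere; intermediate codes keep norm 1."""
    z0 = z0 / np.linalg.norm(z0)
    z1 = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0, z1), -1.0, 1.0))  # angle between codes
    if np.isclose(omega, 0.0):
        return z0  # codes are (nearly) identical
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Two hypothetical 128-dim latent scene codes, projected onto the sphere.
rng = np.random.default_rng(0)
z_a = rng.normal(size=128)
z_a /= np.linalg.norm(z_a)
z_b = rng.normal(size=128)
z_b /= np.linalg.norm(z_b)

# Intermediate codes stay on the sphere and could be decoded into scenes.
for t in np.linspace(0.0, 1.0, 5):
    print(round(float(t), 2), float(np.linalg.norm(slerp(z_a, z_b, t))))  # norm ~1.0
```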
Pose2Room: Understanding 3D Scenes from Human Activities
Yinyu Nie, Angela Dai, Xiaoguang Han, Matthias Nießner
ECCV 2022
From an observed pose trajectory of a person performing daily activities in an indoor scene, we learn to estimate likely object configurations underlying these interactions, as a set of object class labels and oriented 3D bounding boxes. By sampling from our probabilistic decoder, we synthesize multiple plausible object arrangements; see the sketch after this entry.
[video] [bibtex] [project page]
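Because the decoder is probabilistic, repeated sampling yields different but plausible configurations, each a set of class labels with oriented 3D boxes. The toy sketch below only illustrates that sampling step with a made-up Gaussian over box parameters and a categorical over classes; it is not the paper's model, and every parameter value is an assumption.

```python
# Toy sketch (assumption, not Pose2Room's decoder): sampling several plausible
# object hypotheses from hand-picked distributions over box parameters/classes.
import numpy as np

rng = np.random.default_rng(42)

# Pretend per-object predicted distribution: mean/std of an oriented box
# parameterised as (cx, cy, cz, sx, sy, sz, yaw).
mean = np.array([1.0, 0.0, 0.4, 0.6, 0.6, 0.8, 0.0])
std = np.array([0.1, 0.1, 0.02, 0.05, 0.05, 0.05, 0.2])

class_names = ["chair", "table", "bed"]
class_probs = np.array([0.7, 0.2, 0.1])  # made-up class distribution

for i in range(3):  # three plausible arrangements for the same pose trajectory
    box = rng.normal(mean, std)                     # sample box parameters
    label = rng.choice(class_names, p=class_probs)  # sample a class label
    print(f"sample {i}: class={label}, box={np.round(box, 2)}")
```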
RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction
Yinyu Nie, Ji Hou, Xiaoguang Han, Matthias Nießner
CVPR 2021
We introduce RfD-Net, which jointly detects and reconstructs dense object surfaces from raw point clouds. It leverages the sparsity of point cloud data and focuses on predicting shapes that are recognized with high objectness. This not only eases the difficulty of learning 2D manifold surfaces from sparse 3D space; the point clouds in each object proposal also convey shape details that support implicit function learning for reconstructing high-resolution surfaces (a minimal sketch of such implicit querying follows below).
[video] [code] [bibtex] [project page]
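The reconstruction half of the pipeline rests on an implicit function that can be queried at arbitrary 3D points, which is what permits surface extraction at any resolution. Below is a minimal sketch of that querying mechanism with an analytic sphere standing in for the learned occupancy network; the grid resolution, bounds, and all names are assumptions, not RfD-Net's implementation.

```python
# Toy sketch (assumption, not RfD-Net code): querying an implicit occupancy
# function on a dense 3D grid; an analytic sphere replaces the learned network.
import numpy as np

def occupancy(points, radius=0.5):
    """Return 1.0 inside a sphere of the given radius, 0.0 outside."""
    return (np.linalg.norm(points, axis=-1) < radius).astype(np.float32)

# Dense query grid in [-1, 1]^3; the resolution is a free choice at test time,
# which is why implicit representations support high-resolution surfaces.
res = 64
axis = np.linspace(-1.0, 1.0, res)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
occ = occupancy(grid.reshape(-1, 3)).reshape(res, res, res)

print("occupied voxels:", int(occ.sum()), "of", res ** 3)
# A mesh could then be extracted from `occ`, e.g. with marching cubes.
```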