Name: Yinyu Nie
Position: Post Doc
E-Mail: yinyu.nie@tum.de
Phone: +49(89)289-17888
Room No: 02.07.039

Bio

I have been working as a postdoctoral researcher at the Visual Computing Lab, Technical University of Munich (TUM) since 2021. Previously, I did my Ph.D. research on content-aware 3D scene understanding at the National Centre for Computer Animation, Bournemouth University from 2017 to 2020. During my Ph.D., I visited the Chinese University of Hong Kong (Shenzhen) and the Shenzhen Research Institute of Big Data from 2019 to 2020 as a visiting researcher. My research interests lie in 3D vision, focusing on 3D scene understanding, shape analysis, and reconstruction. For more details, please check my personal page: https://yinyunie.github.io/

Publications

2023

Learning 3D Scene Priors with 2D Supervision
Yinyu Nie, Angela Dai, Xiaoguang Han, Matthias Nießner
CVPR 2023
We learn 3D scene priors with 2D supervision. We model a latent hypersphere surface to represent a manifold of 3D scenes, characterizing the semantic and geometric distribution of objects in 3D scenes. This supports many downstream applications, including scene synthesis, interpolation, and single-view reconstruction.
[video][bibtex][project page]

2022

Pose2Room: Understanding 3D Scenes from Human Activities
Yinyu Nie, Angela Dai, Xiaoguang Han, Matthias Nießner
ECCV 2022
From an observed pose trajectory of a person performing daily activities in an indoor scene, we learn to estimate likely object configurations of the scene underlying these interactions, as a set of object class labels and oriented 3D bounding boxes. By sampling from our probabilistic decoder, we synthesize multiple plausible object arrangements.
[video][bibtex][project page]

2021

RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction
Yinyu Nie, Ji Hou, Xiaoguang Han, Matthias Nießner
CVPR 2021
We introduce RfD-Net, which jointly detects and reconstructs dense object surfaces from raw point clouds. It leverages the sparsity of point cloud data and focuses on predicting shapes that are recognized with high objectness. This not only eases the difficulty of learning 2D manifold surfaces from sparse 3D space; the point clouds in each object proposal also convey shape details that support implicit function learning to reconstruct high-resolution surfaces.
[video][code][bibtex][project page]