| Name: | Ji Hou (侯骥) |
| --- | --- |
| Position: | Ph.D. Candidate |
| E-Mail: | ji.hou@tum.de |
| Phone: | +49-89-289-19595 |
| Room No: | 02.13.35 |
Panoptic 3D Scene Reconstruction From a Single RGB Image
Manuel Dahnert, Ji Hou, Matthias Nießner, Angela Dai
NeurIPS 2021
Panoptic 3D scene reconstruction combines the tasks of 3D reconstruction, semantic segmentation, and instance segmentation. From a single RGB image, we predict 2D information and lift it into a sparse volumetric 3D grid, where we predict geometry, semantic labels, and 3D instance labels.
[video][code][bibtex][project page]
Pri3D: Can 3D Priors Help 2D Representation Learning?
Ji Hou, Saining Xie, Benjamin Graham, Angela Dai, Matthias Nießner
ICCV 2021
Pri3D leverages 3D priors for downstream 2D image understanding tasks: during pre-training, we incorporate view-invariant and geometric priors from the color-geometry correspondences in RGB-D datasets, imbuing the learned features with geometric structure. We show that these 3D-imbued features transfer effectively, improving performance on 2D tasks such as semantic segmentation, object detection, and instance segmentation.
[video][code][bibtex][project page]
RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction
Yinyu Nie, Ji Hou, Xiaoguang Han, Matthias Nießner
CVPR 2021
We introduce RfD-Net, which jointly detects and reconstructs dense object surfaces from raw point clouds. It leverages the sparsity of point cloud data and focuses on predicting shapes that are recognized with high objectness. This not only eases the difficulty of learning 2D manifold surfaces from sparse 3D space; the point clouds in each object proposal also convey shape details that support implicit function learning for reconstructing high-resolution surfaces.
[video][code][bibtex][project page]
Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts
Ji Hou, Benjamin Graham, Matthias Nießner, Saining Xie
CVPR 2021 (Oral)
Our study reveals that exhaustive labeling of 3D point clouds might be unnecessary: remarkably, on ScanNet, even using only 0.1% of point labels, we still achieve 89% (instance segmentation) and 96% (semantic segmentation) of the baseline performance obtained with full annotations.
[video][bibtex][project page]
RevealNet: Seeing Behind Objects in RGB-D Scans
Ji Hou, Angela Dai, Matthias Nießner
CVPR 2020
This paper introduces the task of semantic instance completion: from an incomplete RGB-D scan of a scene, we detect the individual object instances comprising the scene and jointly infer their complete object geometry.
[bibtex][project page]
3D-SIS: 3D Semantic Instance Segmentation of RGB-D Scans
Ji Hou, Angela Dai, Matthias Nießner
CVPR 2019 (Oral)
We introduce 3D-SIS, a novel neural network architecture for 3D semantic instance segmentation in commodity RGB-D scans.
[video][code][bibtex][project page]