Name: Yawar Siddiqui
Position: Ph.D. Candidate
E-Mail: yawar.siddiqui@tum.de
Phone: +49-89-289-18489
Room No: 02.07.037

Bio

Hi! I'm Yawar. I did my Bachelor's in Computer Science in India and my Master's at TU Munich. I wrote my Master's thesis on active learning for semantic segmentation with the Visual Computing Lab before joining as an intern. In the past, I've worked with Adobe Systems' Core Technologies Imaging team in India, and as an intern with Disney Research, Zurich, on neck tracking and reconstruction.

Research Interests

2D-3D Scene Understanding, Human Body Reconstruction

Publications

2021

SPSG: Self-Supervised Photometric Scene Generation from RGB-D Scans
Angela Dai, Yawar Siddiqui, Justus Thies, Julien Valentin, Matthias Nießner
CVPR 2021
We present SPSG, a novel approach to generate high-quality, colored 3D models of scenes from RGB-D scan observations by learning to infer unobserved scene geometry and color in a self-supervised fashion. Rather than relying on 3D reconstruction losses to inform our 3D geometry and color reconstruction, we propose adversarial and perceptual losses operating on 2D renderings in order to achieve high-resolution, high-quality colored reconstructions of scenes.
[video] [code] [bibtex] [project page]

2020

ViewAL: Active Learning with Viewpoint Entropy for Semantic Segmentation
Yawar Siddiqui, Julien Valentin, Matthias Nießner
CVPR 2020
We propose ViewAL, a novel active learning strategy for semantic segmentation that exploits viewpoint consistency in multi-view datasets.
[video] [code] [bibtex] [project page]