Name: Justus Thies
Position: Postdoctoral Researcher
E-Mail: justus.thies@tum.de
Phone: +49-89-289-18456
Room No: 02.13.042

Bio

Justus Thies is a postdoctoral researcher at the Technical University of Munich. In September 2017 he joined the Visual Computing Lab of Prof. Dr. Matthias Nießner. Previously, he was a PhD student at the University of Erlangen-Nuremberg under the supervision of Günther Greiner. He started his PhD studies in 2014 after receiving his Master of Science degree from the University of Erlangen-Nuremberg. During his time as a PhD student he collaborated with other institutes and did internships at Stanford University and the Max Planck Institute for Informatics. His research focuses on real-time facial performance capture and expression transfer using commodity hardware. He is therefore interested in computer vision and computer graphics, as well as in efficient implementations of optimization techniques, especially on graphics hardware. His publications opened up a new research field: real-time facial reenactment. The quality, efficiency, and reduced hardware requirements of his methods have attracted considerable attention in academia, industry, and the media. His dissertation, "Face2Face: Real-time Facial Reenactment", summarizes these publications and discusses the implications of the demonstrated technologies.

Besides computer science, he has a strong interest in mechanical engineering and numerically controlled (CNC) machines. His hobbies include building CNC machines, remote-controlled quadcopters, planes, and cars.

Publications

2018

FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces
Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, Matthias Nießner
arXiv 2018
In this paper, we introduce FaceForensics, a large-scale video dataset consisting of 1004 videos with more than 500,000 frames, altered with Face2Face, that can be used for forgery detection and to train generative refinement methods.
[video][bibtex][project page]

HeadOn: Real-time Reenactment of Human Portrait Videos
Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner
ACM Transactions on Graphics 2018 (TOG)
We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor.
[video][bibtex][project page]

FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality
Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner
ACM Transactions on Graphics 2018 (TOG)
We propose FaceVR, a novel image-based method that enables video teleconferencing in VR based on self-reenactment, producing nearly photo-realistic outputs. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD).
[video][bibtex][project page]

Deep Video Portraits
Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt
ACM Transactions on Graphics 2018 (TOG)
Our novel approach enables photo-realistic re-animation of portrait videos using only an input video. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor.
[video][bibtex][project page]

InverseFaceNet: Deep Monocular Inverse Face Rendering
Hyeongwoo Kim, Michael Zollhöfer, Ayush Tewari, Justus Thies, Christian Richardt, Christian Theobalt
CVPR 2018
We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image. By estimating all parameters from just a single image, advanced editing possibilities on a single face image, such as appearance editing and relighting, become feasible in real time.
[video][bibtex][project page]

State of the Art on Monocular 3D Face Reconstruction, Tracking, and Applications
Michael Zollhöfer, Justus Thies, Derek Bradley, Pablo Garrido, Thabo Beeler, Patrick Pérez, Marc Stamminger, Matthias Nießner, Christian Theobalt
Eurographics 2018
This state-of-the-art report summarizes recent trends in monocular facial performance capture and discusses its applications, which range from performance-based animation to real-time facial reenactment. We focus our discussion on methods where the central task is to recover and track a three-dimensional model of the human face using optimization-based reconstruction algorithms.
[bibtex][project page]

2017

FaceForge: Markerless Non-Rigid Face Multi-Projection Mapping
Christian Siegl, Vanessa Lange, Marc Stamminger, Frank Bauer, Justus Thies
ISMAR 2017
In this paper, we introduce FaceForge, a multi-projection mapping system that is able to alter the appearance of a non-rigidly moving human face in real time.
[bibtex][project page]

2016

Face2Face: Real-time Face Capture and Reenactment of RGB Videos
Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner
CVPR 2016 (Oral)
We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video with those of a source actor and re-render the manipulated output video in a photo-realistic fashion.
[video][bibtex][supplemental][project page]

2015

Real-time Expression Transfer for Facial Reenactment
Justus Thies, Michael Zollhöfer, Matthias Nießner, Levi Valgaerts, Marc Stamminger, Christian Theobalt
ACM Transactions on Graphics 2015 (TOG)
We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling ad-hoc control of the facial expressions of the target actor.
[video][bibtex][project page]

Real-Time Pixel Luminance Optimization for Dynamic Multi-Projection Mapping
Christian Siegl, Matteo Colaianni, Lucas Thies, Justus Thies, Michael Zollhöfer, Shahram Izadi, Marc Stamminger, Frank Bauer
ACM Transactions on Graphics 2015 (TOG)
Projection mapping enables us to bring virtual worlds into shared physical spaces. In this paper, we present a novel, adaptable, real-time projection mapping system that supports multiple projectors and high-quality rendering of dynamic content on surfaces of complex geometric shape. Our system allows for smooth blending across multiple projectors using a new optimization framework that simulates the diffuse direct light transport of the physical world to continuously adapt the color output of each projector pixel.
[video][bibtex][project page]