Real-time Simulation of Human Vision using Temporal Compositing with CUDA on the GPU
Matthias Nießner1     Nadine Kuhnert2     Kai Selgrad2     Marc Stamminger2     Georg Michelson2    
    1Technical University of Munich     2University of Erlangen-Nuremberg
Proceedings 25th Workshop on Parallel Systems and Algorithms 2013
Abstract

We present a novel approach that simulates human vision, including visual defects such as glaucoma, by temporally compositing perceived sharpness in real time on the GPU. To this end, we determine the eye's focus point at every time step and adapt the lens accommodation of our virtual eye model accordingly. The focal distance is then used to determine the blurriness of observed scene regions; i.e., we compute a defocus value for every visible pixel. To simulate visual memory, we introduce a sharpness field in which defocus values are integrated temporally, allowing sharply perceived scene points to be memorized. For visualization, we ray trace the virtual scene environment and incorporate depth of field based on the sharpness field data. Thus, our algorithm simulates human vision while mimicking visual memory. We consider this particularly useful for illustration purposes for patients with visual defects such as glaucoma. To run our algorithm in real time, we employ massively parallel graphics hardware.
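The following CUDA sketch illustrates the kind of temporal integration the abstract describes: a persistent per-pixel sharpness value is updated each frame from the current defocus, so that points once seen sharply remain memorized. All names (updateSharpnessField, decay) and the concrete update rule (a decaying maximum), as well as the screen-space layout of the field, are illustrative assumptions and not the authors' implementation.

```cuda
// Hypothetical sketch: temporal integration of per-frame defocus into a
// persistent "sharpness field". The update rule and all names are
// illustrative assumptions, not the paper's actual method.
#include <cuda_runtime.h>

__global__ void updateSharpnessField(float*       sharpness, // persistent field, one value per pixel
                                     const float* defocus,   // current-frame defocus (0 = in focus)
                                     int width, int height,
                                     float decay)            // models fading of visual memory
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;

    // Sharpness perceived at this pixel in the current frame (1 = fully sharp).
    float current = 1.0f - fminf(defocus[idx], 1.0f);

    // Let the memorized sharpness fade slowly, but keep the maximum of the
    // faded memory and the current perception, so once-sharp scene points
    // stay sharp in the composited result.
    sharpness[idx] = fmaxf(sharpness[idx] * decay, current);
}

// Host-side launch: one thread per pixel.
void launchUpdate(float* d_sharpness, const float* d_defocus,
                  int width, int height, float decay)
{
    dim3 block(16, 16);
    dim3 grid((width  + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    updateSharpnessField<<<grid, block>>>(d_sharpness, d_defocus,
                                          width, height, decay);
}
```

The resulting sharpness field would then drive the depth-of-field blur applied during ray tracing; in the paper the field is tied to scene points rather than necessarily to screen pixels, so the screen-space buffer above is a simplification.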