Displays play a vital role in many professional and personal activities. They are a crucial interface between the user and the digital world in tasks involving visualization of and interaction with digital data. New display technologies can reproduce important visual cues, such as binocular disparity, accommodation, and motion parallax, but these capabilities have outpaced the methods for optimizing graphics content to match the requirements of particular hardware designs. This mismatch leads to poor visual quality and massive computational overhead, which hamper the adoption of novel displays. I argue that there are significant gaps between hardware, computational techniques, and our understanding of human perception, which prevent us from taking full advantage of these technologies.
To overcome these limitations, my team and I will combine hardware, computation, and perception in a unique platform where display capabilities and quality requirements are represented in a shared space. The basis for our project will be an in-depth understanding of human perception. Our experiments will focus on three aspects: (1) investigating perceptual limits across a wide field of view, (2) accounting for all visual cues, and (3) establishing optimal trade-offs between different quality aspects. We will build efficient computational models that predict perceived quality and enable perceptual optimizations to drive new content-adaptation techniques.
This project will contribute display-specific perceptual optimizations that tailor graphics content to the requirements of human perception. It will address key aspects of portable devices, such as energy efficiency and visual quality. Our experiments and modeling of human perception will provide crucial insights for new hardware developments. These contributions will be essential for the development and standardization of new, high-quality display devices, which will not only improve existing applications but also enable new ones.