High Dynamic Range Imaging
Leader of the Group: Prof. Dr. Karol Myszkowski
Vision and Research Strategy
The common goal of all research efforts of the group is to advance knowledge of image perception and to develop imaging algorithms with embedded computational models of the human visual system (HVS). This approach offers significant improvements in both computational performance and perceived image quality. We often focus on perceptual rather than physical effects, which puts more emphasis on the experience of observers than on physical measurements. In particular, we aim to exploit perceptual effects as a means of overcoming the physical limitations of display devices and enhancing the apparent image quality.
An important topic that is often neglected in computer graphics is accounting for display device characteristics in the design of rendering algorithms. This becomes even more important with the recent proliferation of display technologies, which differ substantially in reproduced contrast, brightness, display size, spatial resolution, and dynamic response. In the past, we proposed a number of perception-based techniques that enhance all of these important image quality factors beyond the physical limitations of display devices. Recently, we have become interested in the problem of depth reproduction and the improvement of visual comfort for stereoscopic 3D and multiscopic displays (work in collaboration with Piotr Didyk and Tobias Ritschel). We realized that, on the technical level, we can capitalize on our experience gained from high dynamic range imaging, e.g., by seeking analogies between tone mapping operators, which have been a subject of our intensive research, and the recently proposed disparity retargeting. We additionally consider gaze direction to enhance depth perception in the central foveal region, where depth sensitivity is highest, at the expense of strong depth compression in the retinal periphery. Here, an important issue is system latency during the update of display content; in our experience, this issue turned out to be less critical in the context of disparity manipulation. We are also interested in reproducing visual cues that cannot be experienced on traditional 2D and stereoscopic 3D displays, such as motion parallax and variable eye lens accommodation. In collaboration with Dr. Didyk, we investigate motion parallax effects in displays with head tracking support. In collaboration with the group of Prof. Fuchs at UNC Chapel Hill and Dr. 
Luebke at NVidia, we participate in building a wide-field-of-view varifocal near-eye display that uses a see-through deformable membrane and responds to eye lens accommodation as a function of depth in the gaze direction (via eye tracking). Our goal is to investigate the issues arising in simultaneous perception of the virtual and real world in the presence of all important visual cues, such as stereovision, eye accommodation, and motion parallax. Essentially, all visual cues that a human observer experiences under real-world observation conditions can be reproduced by such displays, although technological constraints may limit this reproduction. We envision that by embedding knowledge of human perception into the content processing for such displays, many of these display limitations can be mitigated. Considering the recent developments in light field displays and head-mounted displays, our research proves to be timely.
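The analogy between tone mapping and disparity retargeting mentioned above can be illustrated with a minimal sketch: the same logarithmic compression that remaps scene luminance into a display's dynamic range can remap pixel disparities into a viewing comfort zone. This is a deliberately simplified global operator for illustration only; function names and parameter values are our own, not those of any published method.

```python
import math

def compress_log(values, out_min, out_max):
    """Logarithmically remap positive values into [out_min, out_max].

    The same operator serves as a global tone mapper (values are scene
    luminances, the output range is the display's) and as a simple
    disparity retargeting (values are disparities shifted to be
    positive, the output range is the viewing comfort zone).
    """
    lo, hi = min(values), max(values)
    log_lo, log_hi = math.log(lo), math.log(hi)
    span = (log_hi - log_lo) or 1.0  # avoid division by zero on flat input
    return [out_min + (math.log(v) - log_lo) / span * (out_max - out_min)
            for v in values]

# Tone mapping: scene luminances (cd/m^2) into an 8-bit display range.
luminances = [0.01, 1.0, 100.0, 10000.0]
mapped = compress_log(luminances, 0.0, 255.0)

# Disparity retargeting: pixel disparities into a +/-20 px comfort zone.
disparities = [1.0, 5.0, 40.0, 160.0]
retargeted = compress_log(disparities, -20.0, 20.0)
```

In practice, state-of-the-art operators in both domains are local and content-adaptive rather than a single global curve, but the structural correspondence between the two problems is what makes the transfer of techniques possible.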
Based on our past experience in developing perception-based image and video quality metrics, and on our current research on advanced displays, we are actively working towards objective measurements of stereoscopic image and light field quality.
Research Areas and Achievements
Audience interest in stereoscopic imaging is again declining, as current display technologies introduce uncomfortable differences with respect to real-world observation conditions. We argue that such differences can be reduced by careful processing of the 3D content. For example, scene animation triggers pictorial motion parallax, a relatively strong depth cue that can be freely reproduced on any 2D screen without causing any discomfort. We exploit the fact that in many practical scenarios such motion parallax provides sufficiently strong depth information that the contribution of binocular depth cues can be reduced through aggressive disparity compression. Based on our perceptual experiments, we developed a joint disparity-parallax computational model that predicts the apparent depth resulting from both cues and enables a more efficient use of the limited disparity budget, so that it remains in a range comfortable for the observer. The problem of a limited depth budget can be significantly alleviated when the eye fixation regions can be roughly estimated. We propose a new method for stereoscopic depth adjustment that utilizes eye tracking and performs local disparity manipulations to find the optimal trade-off between depth reproduction and visual comfort.
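The core idea of gaze-contingent disparity adjustment can be sketched in a few lines: preserve disparity near the tracked fixation point and compress it toward the comfort budget in the periphery. This is a strong simplification of the actual optimization-based manipulation; the Gaussian falloff, the parameter values, and the function name are illustrative assumptions.

```python
import math

def adjust_disparity(disparities, gaze_x, sigma=50.0, budget=10.0):
    """Gaze-contingent disparity scaling (illustrative sketch).

    `disparities` is a list of (pixel x-position, disparity in px)
    pairs. Near the gaze position the original disparity is preserved;
    away from it, disparity is clamped into the comfortable `budget`.
    `sigma` controls the size of the foveal region in pixels.
    """
    out = []
    for x, d in disparities:
        # Gaussian weight: 1 at the fixation point, ~0 in the periphery.
        w = math.exp(-((x - gaze_x) ** 2) / (2.0 * sigma ** 2))
        # Blend the original disparity (at the fovea) with a version
        # clamped into the comfort budget (at the periphery).
        clamped = max(-budget, min(budget, d))
        out.append(w * d + (1.0 - w) * clamped)
    return out
```

A real system would additionally smooth the manipulation over time to hide latency-induced changes during saccades, which is why system latency proved less critical for disparity manipulation than one might expect.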
Luminance adaptation is a fundamental mechanism of the visual system that enables us to see even under drastically varying illumination conditions. In the perceptual literature, the luminance adaptation of individual photoreceptors is either assumed to be independent, or pooling over a small retinal region is assumed to account for the lateral interconnections between neighboring photoreceptors. The HDR literature typically considers ad hoc pooling radii. We contribute to both fields by proposing a data-driven luminance adaptation model that accounts for local content in natural images. We also propose a practical algorithm for capturing HDR video on mobile phones, where our multi-exposure technique adaptively selects the exposure setting for each frame, so that the correspondence between frames can be robustly computed and ghosting artifacts minimized.
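For context, the merging step that any multi-exposure HDR pipeline builds on can be sketched as a weighted average of linearized exposures; our actual contribution lies in the adaptive exposure selection and robust frame alignment around it, which this sketch omits. The hat-shaped weighting is a standard choice, not specific to our method.

```python
def merge_exposures(exposures):
    """Merge multi-exposure pixel stacks into a radiance estimate.

    `exposures` is a list of (pixel_values, exposure_time) pairs, with
    pixel values already linearized to [0, 1]. Each sample is weighted
    by a hat function so that over- and under-exposed samples contribute
    little; the result is a per-pixel relative radiance.
    """
    n = len(exposures[0][0])
    radiance = []
    for i in range(n):
        num = den = 0.0
        for pixels, t in exposures:
            v = pixels[i]
            w = 1.0 - abs(2.0 * v - 1.0)  # hat weight, peaks at mid-gray
            num += w * v / t              # divide by exposure time
            den += w
        radiance.append(num / den if den > 0 else 0.0)
    return radiance
```

Note how a pixel that saturates in the longer exposure receives zero weight there and is recovered entirely from the shorter exposure; ghosting arises precisely when the per-exposure samples of one scene point no longer correspond, which is what adaptive exposure selection mitigates.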
Locally Varying Frame Rates
The visual quality of a motion picture is significantly influenced by the choice of display frame rate. Increasing the frame rate improves image clarity and helps to alleviate many artifacts, such as blur, strobing, flicker, or judder. These benefits, however, come at the price of losing well-established film aesthetics, often referred to as the “cinematic look”. We propose a novel perception-calibrated temporal filtering technique that emulates the whole spectrum of presentation frame rates on a single-frame-rate display. By varying the filtering parameters, we can achieve the impression of a continuously varying display frame rate in both the spatial and temporal dimensions.
Perceptual BRDF Modeling
Another important issue in computer graphics is the rendering of material appearance, where an unprecedented level of realism may be achieved by using captured bidirectional reflectance distribution functions (BRDFs). We propose a new image-based methodology for comparing different anisotropic BRDFs, which might be captured with different angular resolutions and accuracy, undergo aggressive compression, or be fitted to analytical reflectance models. We also propose an intuitive control space for the predictable editing of such captured BRDF data, which allows for the artistic creation of plausible novel material appearances, bypassing the difficulty of acquiring new samples. Through a large-scale perceptual experiment, we correlate high-level perceptual attributes with an underlying PCA-based BRDF representation, so that material appearance editing can be performed directly in terms of these high-level attributes.
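The structure of such attribute-based editing can be sketched as linear operations in the PCA space: shift a captured BRDF's coefficients along a direction correlated with a perceptual attribute (e.g., glossiness), then reconstruct the edited BRDF from the basis. The linear attribute model and all names below are illustrative assumptions, not the exact mapping learned from our experiments.

```python
def edit_attribute(coeffs, attribute_dir, delta):
    """Shift a BRDF's PCA coefficients along a perceptual attribute.

    `coeffs` are the PCA coordinates of a captured BRDF and
    `attribute_dir` is a direction in that space correlated with a
    high-level attribute such as "glossiness", as learned from
    perceptual experiments. `delta` is the edit strength.
    """
    return [c + delta * a for c, a in zip(coeffs, attribute_dir)]

def reconstruct(mean, basis, coeffs):
    """Rebuild BRDF samples from the PCA mean and basis vectors."""
    return [m + sum(c * b[i] for c, b in zip(coeffs, basis))
            for i, m in enumerate(mean)]
```

Because the edit is performed in the low-dimensional space spanned by captured materials, the result tends to remain a plausible material rather than an arbitrary reflectance function.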