Our group conducts foundational research at the intersection of computer graphics and machine learning, with a focus on rendering. We seek to develop algorithms and data structures that enable fast, high-quality image generation for interactive virtual environments. To this end, we consider a broad range of approaches for synthesizing images.
Traditional rendering frameworks usually employ a mesh-based scene representation and use ray tracing or rasterization to produce a discrete pixel grid of color values. We explore alternative and complementary designs for the image synthesis pipeline, especially (but not exclusively) in light of recent advances in machine learning. Meshes as scene representations are complemented by implicit fields, point clouds, plain images, or expressive latent spaces learned from training data. Ray tracing and rasterization are augmented with neural networks, while pixel grids give way to continuous multi-scale image representations. To maintain the efficiency required for interactive virtual environments, we develop natively parallel solutions.
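To make the idea of augmenting rasterization with neural networks concrete, here is a minimal illustrative sketch (not the group's actual method): a rasterizer is assumed to have produced a G-buffer of per-pixel feature vectors, and a tiny MLP with stand-in random weights maps those features to colors, in the spirit of neural deferred shading. All names, sizes, and weights here are hypothetical.

```python
import numpy as np

# Illustrative sketch only: a "neural deferred shading" pass.
# Instead of shading rasterized fragments with a fixed analytic model,
# a small network maps per-pixel features to final colors.
rng = np.random.default_rng(0)

# Stand-in G-buffer: per-pixel feature vectors a rasterizer might emit
# (e.g. normals, depth, learned descriptors); 16x16 pixels, 8 channels.
features = rng.standard_normal((16, 16, 8))

# A tiny two-layer MLP with untrained (random) stand-in weights.
w1, b1 = rng.standard_normal((8, 32)) * 0.1, np.zeros(32)
w2, b2 = rng.standard_normal((32, 3)) * 0.1, np.zeros(3)

h = np.maximum(features @ w1 + b1, 0.0)      # ReLU hidden layer
rgb = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))   # sigmoid -> colors in (0, 1)
```

In a trained system, the network weights would be optimized jointly with the scene representation; here they merely illustrate how a learned component slots into the otherwise classical pipeline.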
We consider all types of image synthesis algorithms and explore the entire continuum of techniques, including physically-based, image-based, and data-driven/neural rendering, each of which offers different advantages that we seek to reconcile.
Statistical image models based on deep learning are approaching photorealism. We investigate how these expressive models can be incorporated into the image synthesis pipeline in a controllable manner.
Scene and Image Representations
The representation of a scene or an image is crucially linked to synthetic image generation. We therefore explore different representations, such as meshes, implicit fields, and point clouds, and study their respective properties in terms of image quality, efficiency, sparsity, and controllability.
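As a toy illustration of one such representation, the following sketch renders the silhouette of a sphere described purely as an implicit field: a signed distance function (SDF) queried via sphere tracing. The scene, resolution, and parameters are all hypothetical choices for this example.

```python
import numpy as np

def sdf_sphere(p, radius=0.5):
    """Signed distance from points p of shape (N, 3) to a sphere at the origin."""
    return np.linalg.norm(p, axis=-1) - radius

def sphere_trace(origins, directions, max_steps=64, eps=1e-4):
    """March each ray by the safe distance the SDF guarantees; return a hit mask."""
    t = np.zeros(origins.shape[0])
    hit = np.zeros(origins.shape[0], dtype=bool)
    for _ in range(max_steps):
        p = origins + t[:, None] * directions
        d = sdf_sphere(p)
        hit |= d < eps
        t += np.where(hit, 0.0, d)  # stop advancing rays that already hit
    return hit

# Render a tiny 8x8 silhouette with orthographic rays along +z.
n = 8
ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
origins = np.stack([xs, ys, np.full_like(xs, -2.0)], axis=-1).reshape(-1, 3)
directions = np.tile(np.array([0.0, 0.0, 1.0]), (n * n, 1))
mask = sphere_trace(origins, directions).reshape(n, n)
```

Unlike a mesh, the implicit field has no explicit vertices: geometry is defined everywhere in space by a continuous function, which also makes it a natural output for a neural network.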
Efficiency and Parallelism
Many rendering applications, such as virtual and mixed reality, require visual feedback on the order of milliseconds. To meet these extreme constraints, we design algorithms with built-in parallelism, so that we can benefit from specialized hardware such as graphics processing units (GPUs) and tensor processing units (TPUs).
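As a simple stand-in for this kind of data parallelism, the sketch below writes the same per-pixel operator (Reinhard tone mapping, used here only as an example) once as a scalar loop and once as a single array expression; the latter formulation maps directly onto SIMD units and GPU kernels.

```python
import numpy as np

def tonemap_loop(hdr):
    """Reinhard tone mapping applied one pixel at a time."""
    out = np.empty_like(hdr)
    for i in range(hdr.shape[0]):
        for j in range(hdr.shape[1]):
            out[i, j] = hdr[i, j] / (1.0 + hdr[i, j])
    return out

def tonemap_parallel(hdr):
    """The same operator as one array expression: every pixel is
    independent, so the work parallelizes trivially."""
    return hdr / (1.0 + hdr)

# Stand-in HDR image with random radiance values.
hdr = np.random.default_rng(0).uniform(0.0, 10.0, size=(64, 64))
ldr = tonemap_parallel(hdr)
```

The two functions compute identical results; the point is that expressing per-pixel work without cross-pixel dependencies is what lets specialized hardware execute it in parallel.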