Sampling and Rendering

Leader of the group: Gurprit Singh, PhD

Vision and Research Strategy

Our group is dedicated to developing a modern deep learning pipeline for sampling and rendering algorithms. We strive to provide an end-to-end workflow for applications ranging from offline production rendering to real-time devices for virtual and augmented reality (VR/AR). Each application has different requirements; in all cases, however, the rendered image quality is directly affected by the underlying sampling strategy and the sample correlations. Our group aims to develop sound theoretical error formulations, grounded in the Monte Carlo and Quasi-Monte Carlo (MCQMC) literature, to control sample correlations across multiple dimensions. This would, in turn, allow designing task-specific loss functions for training. Simultaneously, we equip modern deep learning architectures with state-of-the-art quality metrics to deliver proofs of concept. However, existing neural network (NN) architectures are designed and optimized either for structured data such as images or for point clouds limited to 3D. To this end, we are developing optimization techniques to scale deep NNs for our purposes, allowing progressive generation of millions of correlated point samples per second in high dimensions (hundreds). We hope that our research will also benefit the computer vision and geometry and shape analysis communities, which deal with unstructured data (e.g., point clouds), by establishing a coherent exchange of knowledge.
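As a minimal illustration of how sample correlations affect estimation quality, the sketch below (plain NumPy, with a hypothetical smooth 1D integrand standing in for a pixel integral) compares the Monte Carlo error of uncorrelated random samples against jittered (stratified) samples; stratification is one of the simplest sample correlations and lowers variance for smooth integrands.

```python
import numpy as np

def mc_estimate(samples, f):
    """Monte Carlo estimate of the integral of f over [0, 1]."""
    return np.mean(f(samples))

def random_samples(n, rng):
    """Uncorrelated (i.i.d. uniform) samples."""
    return rng.random(n)

def jittered_samples(n, rng):
    """One sample per stratum: a simple correlated sampling pattern."""
    return (np.arange(n) + rng.random(n)) / n

# Hypothetical smooth integrand standing in for a pixel integral.
f = lambda x: np.sin(np.pi * x)          # exact integral: 2 / pi
rng = np.random.default_rng(0)
n, trials = 64, 1000

err_random = np.std([mc_estimate(random_samples(n, rng), f)
                     for _ in range(trials)])
err_jittered = np.std([mc_estimate(jittered_samples(n, rng), f)
                       for _ in range(trials)])
# For smooth integrands, jittered sampling has much lower variance.
```

The same principle carries over to rendering, where each pixel value is such an integral and the choice of sampling pattern directly shapes the error.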

Research Areas and Achievements

Our research areas fall into two broad categories: the analysis of sampling patterns and rendering algorithms, and deep network synthesis. We focus on understanding how different sampling schemes and their correlations affect the error of Monte Carlo (MC) estimation, and then design NN architectures based on the knowledge gained from these analyses.


Sampling and Rendering Analysis

To derive optimal benefits from a deep learning rendering pipeline, we need to design loss functions that reflect not only local but also global interactions, both among samples and within the light transport. This requires analyzing sample correlations and how well a rendering algorithm explores the underlying manifold.
We recently taught a course at SIGGRAPH Asia 2018 and are publishing an EG STAR 2019 report (conditionally accepted) that summarizes the various spatial and Fourier statistical tools developed over the years to better understand the impact of sample correlations on the MC estimation error. The importance functions from which samples are drawn also dramatically affect the estimation error. We have performed an in-depth theoretical analysis of importance sampling in conjunction with correlated samples (to appear as a CGF 2019 journal paper). Our spectral-domain expertise has also drawn attention in other fields: we assisted in developing a spectral measure of distortion for change detection in dynamic graphs (Complex Networks 2018) and a perception-driven hybrid decomposition model for multi-layer VR displays (IEEE VR 2019).
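Many of these spatial and Fourier tools reduce to measuring the power spectrum of a point set. A minimal NumPy sketch (function and variable names are our own illustration, not from any published code):

```python
import numpy as np

def power_spectrum(points, max_freq=8):
    """Power spectrum |S(w)|^2 / N of a 2D point set in [0, 1)^2,
    evaluated on the integer frequency grid [-max_freq, max_freq]^2."""
    n = points.shape[0]
    f = np.arange(-max_freq, max_freq + 1)
    wx, wy = np.meshgrid(f, f)
    # Fourier transform of the sampling function (a sum of Dirac deltas).
    S = np.exp(-2j * np.pi * (wx[..., None] * points[:, 0]
                              + wy[..., None] * points[:, 1])).sum(axis=-1)
    return np.abs(S) ** 2 / n

# For i.i.d. random points, the expected power is 1 at every nonzero
# frequency (white noise) and N at the DC peak; correlated patterns such
# as blue noise instead suppress power at low frequencies.
rng = np.random.default_rng(1)
spectra = np.mean([power_spectrum(rng.random((64, 2)))
                   for _ in range(200)], axis=0)
```

Plotting such expected spectra for different samplers makes their low-frequency behavior, and hence their effect on MC error, directly visible.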
Concurrently, we are analyzing rendering algorithms in the gradient domain to alleviate the non-differentiable nature of the light-transport integrals, which stems from the visibility function present within the integrand.
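The difficulty can be seen in a toy example (our own construction, not the actual light-transport setup): with a step-like visibility term inside the integrand, the pointwise derivative with respect to a scene parameter is zero almost everywhere, so naive finite differences over fixed samples miss the gradient entirely.

```python
import numpy as np

def pixel_intensity(theta, xs):
    """Toy pixel integral with a visibility step at x = theta (hypothetical)."""
    visibility = (xs > theta).astype(float)   # non-differentiable in theta
    return np.mean(visibility * np.cos(xs))

xs = (np.arange(64) + 0.5) / 64               # fixed sample positions
theta, eps = 0.3, 1e-6

# Finite difference over fixed samples: zero unless a sample happens to
# fall inside (theta, theta + eps], even though the true derivative of
# the integral is -cos(theta) != 0.
fd = (pixel_intensity(theta + eps, xs) - pixel_intensity(theta, xs)) / eps
```

Gradient-domain analysis aims at recovering exactly such boundary contributions that pointwise differentiation discards.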


Deep Network Synthesis

Several sampling design principles have been established in the past few years, following rigorous theoretical and empirical analyses (first-authored SIGGRAPH 2015 and 2017 papers by G. Singh). Current state-of-the-art sampling algorithms cannot realize these designs while preserving important correlations across multiple dimensions. We combine modern deep learning architectures with these sampling design principles to model loss functions that can abstractly handle local and global interactions. This results in trained kernels that can produce stochastic samples with the required correlations in a matter of seconds. We published a proof-of-concept deep-learning-based system (arXiv tech report, 2018) that proposes several spatial and Fourier tools as components of the loss function. In more recent work, we achieved state-of-the-art quality and pushed the approach to higher dimensions. We have also developed a memoryless deep learning system that alleviates the constraints imposed by GPU memory limits. This marks another step towards our goal of providing an end-to-end learning pipeline for sampling and rendering algorithms.
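As a sketch of how such Fourier tools can enter a loss function (a simplified stand-in for the losses in the tech report, with hypothetical names), one can compare the radially averaged power spectrum of a generated point set against a target spectrum:

```python
import numpy as np

def radial_power(points, n_bins=16, max_freq=16):
    """Radially averaged power spectrum of a 2D point set in [0, 1)^2."""
    n = points.shape[0]
    f = np.arange(-max_freq, max_freq + 1)
    wx, wy = np.meshgrid(f, f)
    S = np.exp(-2j * np.pi * (wx[..., None] * points[:, 0]
                              + wy[..., None] * points[:, 1])).sum(axis=-1)
    power = np.abs(S) ** 2 / n
    radius = np.sqrt(wx ** 2 + wy ** 2)
    edges = np.linspace(0.5, max_freq, n_bins + 1)  # exclude the DC peak
    idx = np.digitize(radius, edges)
    return np.array([power[idx == i].mean() for i in range(1, n_bins + 1)])

def spectrum_loss(points, target_spectrum):
    """Hypothetical L2 loss between the realized and target radial spectra."""
    return np.sum((radial_power(points, n_bins=len(target_spectrum))
                   - target_spectrum) ** 2)
```

In a training loop, a differentiable variant of such a loss would be evaluated on the generator's output; a flat target of ones corresponds to white noise, while a target that vanishes at low frequencies would push the generator towards blue-noise correlations.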