The research in our group covers the areas of computational photography, image-based modeling and rendering, and 3D scanning and volume reconstruction, with the goal of developing acquisition systems and algorithms for digitizing and reproducing the appearance of real-world objects, ranging from small figurines to entire streets to planetary nebulae.
One central problem in computer graphics is the synthesis of realistic images that are indistinguishable from real photographs. The basic theory behind rendering such images has been known for some time and has been turned into a broad range of rendering algorithms, from slow but physically accurate frameworks to hardware-accelerated, real-time applications that make numerous simplifying assumptions. One fundamental building block of these algorithms is the simulation of the interaction between incident illumination and the reflective properties of the scene. The limiting factor in photo-realistic image synthesis today is not the rendering per se but rather the input data passed to the algorithms: the realism of the outcome depends largely on the quality of the scene and material description, and accurate input is required for geometry, illumination, and reflectance. An efficient way to obtain realistic models is to measure scene attributes from real-world objects by inverse rendering, i.e., to estimate the attributes from real photographs by inverting the rendering process.
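For concreteness, the rendering equation below is the standard formalization of this interaction, and inverse rendering can then be phrased as fitting the unknown scene parameters so that rendered images match the captured photographs. The notation is the textbook one, shown only as an illustrative sketch rather than the formulation of any specific publication of the group:

    % Rendering equation: outgoing radiance at surface point x in direction w_o
    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \,
          (\omega_i \cdot n) \, d\omega_i

    % Inverse rendering: estimate parameters \theta (geometry, reflectance,
    % illumination) from photographs I_k and a forward renderer R_k
    \theta^{*} = \arg\min_{\theta} \sum_{k} \| I_k - R_k(\theta) \|^2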
Traditional structured light 3D scanning systems are designed to capture the geometry of bright, diffuse surfaces of moderate complexity. Shiny or translucent materials, e.g., metals or marble, and objects with high depth complexity typically corrupt the estimated 3D geometry, producing noise or even holes in the reconstructed surface. By designing novel capturing systems, specialized illumination patterns, and appropriate reconstruction algorithms, we are able to capture precise 3D geometry even for such uncooperative objects, both static and dynamic.
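To make the classical baseline concrete, the following Python sketch shows the core of a Gray-code structured light pipeline: decoding the projector column observed at each camera pixel, then intersecting the camera ray with the calibrated light plane of that column. The function names, array shapes, and calibration inputs are illustrative assumptions, not the group's actual implementation:

    import numpy as np

    def decode_gray(patterns, threshold=0.5):
        """Decode per-pixel projector columns from a stack of Gray-code
        images (shape: num_patterns x H x W, intensities in [0, 1])."""
        bits = (patterns > threshold).astype(np.uint8)   # binarize each pattern
        code = np.zeros(patterns.shape[1:], dtype=np.uint32)
        prev = np.zeros(patterns.shape[1:], dtype=np.uint8)
        for b in bits:                                   # Gray -> binary, MSB first
            prev ^= b
            code = (code << 1) | prev
        return code                                      # projector column per pixel

    def intersect_ray_plane(ray, n, d):
        """Intersect a camera ray (unit direction, origin at the camera
        center) with the light plane n . x = d of a decoded column."""
        return (d / (n @ ray)) * ray                     # 3D surface point

The simple thresholding step is exactly where shiny or translucent materials break the assumptions: interreflections and subsurface scattering flip decoded bits, which is what motivates the specialized patterns and reconstruction algorithms mentioned above.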
The research group further focuses on developing photographic techniques for measuring a scene's reflectance properties. A so-called reflectance field captures the light transport within a scene such that all local and global illumination effects, such as highlights, shadows, interreflections, and caustics, are recorded and can be re-rendered under arbitrary illumination. The envisioned techniques should be general enough to cope with arbitrary materials and with scenes of high depth complexity such as trees, and should allow capturing in arbitrary environments, i.e., outside a measurement laboratory.
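Because light transport is linear, relighting from a measured reflectance field reduces, for a fixed viewpoint, to a matrix-vector product: each output pixel is a weighted sum of the basis images recorded one light at a time. A minimal sketch, assuming the transport matrix has already been measured (names and shapes are illustrative):

    import numpy as np

    def relight(T, light):
        """Relight a scene from its measured reflectance field.

        T     : (num_pixels, num_lights) transport matrix; column j is
                the flattened image seen with only light j switched on.
        light : (num_lights,) intensities of the novel illumination.
        """
        return T @ light                                 # linearity: c = T l

    # e.g., image = relight(T, env_weights).reshape(height, width)

Since all global effects are baked into the columns of the transport matrix, this reproduction is exact for any novel lighting expressible in the measured basis; the research challenge lies in measuring that matrix efficiently for complex scenes and outside the laboratory.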
A third thread of research in this group is computational photography, with the goal of developing optical systems augmented by computational procedures: the capturing apparatus, i.e., the optical layout of active or passive devices such as cameras, projectors, and beam-splitters, is designed jointly with the capturing algorithm and appropriate post-processing. Such combined systems are used to increase image quality, e.g., by removing image noise or camera shake, to emphasize or extract scene features such as edges or silhouettes by optical means, or to reconstruct volumetric 3D structures from images. We plan to devise computational photography techniques for advanced optical microscopy, large-scale scene acquisition, and even astronomical imaging.
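As one concrete instance of such joint capture and post-processing, removing camera shake given a known blur kernel (for example, one recovered by the capture system itself) is a deconvolution problem. A minimal Wiener-filter sketch in Python, assuming the kernel and noise level are given; this is a generic textbook technique, not a method specific to the group:

    import numpy as np

    def wiener_deconvolve(blurred, kernel, snr=100.0):
        """Restore a blurred image b = k * x + noise via Wiener filtering.

        blurred : (H, W) observed image.
        kernel  : point-spread function, padded and centered to (H, W).
        snr     : assumed signal-to-noise ratio (regularization strength).
        """
        K = np.fft.fft2(np.fft.ifftshift(kernel))        # kernel spectrum
        B = np.fft.fft2(blurred)
        W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)    # Wiener filter
        return np.real(np.fft.ifft2(W * B))              # sharp estimate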