D2
Computer Vision and Machine Learning

Verica Lazova (PhD Student)


Address
Max-Planck-Institut für Informatik
Saarland Informatics Campus
Phone
+49 681 9325 2000
Fax
+49 681 9325 2099

Personal Information

Research Interests

  • Human modeling from images
  • Generative models
  • Computer vision
  • Machine learning

Education

  • Ph.D. student, Perceiving and modelling people from video and images, Max Planck Institute for Informatics, Saarbrücken, Germany, (February 2019 - present)
  • M.Sc., Computer Science, International Max Planck Research School (IMPRS) and Saarland University, Saarbrücken, Germany, (October 2016 - February 2019)
  • B.Sc., Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje, Macedonia, (September 2011 - September 2015)

Other

http://virtualhumans.mpi-inf.mpg.de/people/Lazova.html

Publications

2022
Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation
V. Lazova, V. Guzov, K. Olszewski, S. Tulyakov and G. Pons-Moll
Technical Report, 2022
(arXiv: 2204.10850)
Abstract
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis. While NeRF-based approaches are effective for novel view synthesis, such models memorize the radiance for every point in a scene within a neural network. Since these models are scene-specific and lack a 3D scene representation, classical editing such as shape manipulation, or combining scenes is not possible. Hence, editing and combining NeRF-based scenes has not been demonstrated. With the aim of obtaining interpretable and controllable scene representations, our model couples learnt scene-specific feature volumes with a scene agnostic neural rendering network. With this hybrid representation, we decouple neural rendering from scene-specific geometry and appearance. We can generalize to novel scenes by optimizing only the scene-specific 3D feature representation, while keeping the parameters of the rendering network fixed. The rendering function learnt during the initial training stage can thus be easily applied to new scenes, making our approach more flexible. More importantly, since the feature volumes are independent of the rendering model, we can manipulate and combine scenes by editing their corresponding feature volumes. The edited volume can then be plugged into the rendering model to synthesize high-quality novel views. We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
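
The abstract describes a hybrid representation: a learnable, scene-specific feature volume paired with a frozen, scene-agnostic rendering network. The following is a minimal PyTorch sketch of that decoupling only, not the authors' implementation; the network sizes, the trilinear volume sampling, and all names (RenderingMLP, sample_volume, etc.) are illustrative assumptions.

# Sketch (not the paper's code): optimize a scene-specific feature volume
# while a shared, scene-agnostic rendering MLP stays frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RenderingMLP(nn.Module):
    """Scene-agnostic network mapping sampled features and view direction to RGB + density."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4),  # 3 RGB channels + 1 density
        )

    def forward(self, feats, view_dirs):
        return self.net(torch.cat([feats, view_dirs], dim=-1))

def sample_volume(volume, points):
    """Trilinearly sample a (1, C, D, H, W) feature volume at 3D points in [-1, 1]^3."""
    grid = points.view(1, -1, 1, 1, 3)               # (1, N, 1, 1, 3)
    feats = F.grid_sample(volume, grid, align_corners=True)  # (1, C, N, 1, 1)
    return feats.view(volume.shape[1], -1).t()       # (N, C)

# Scene-specific, learnable feature volume; the rendering MLP is shared and frozen.
volume = nn.Parameter(torch.randn(1, 16, 32, 32, 32) * 0.01)
renderer = RenderingMLP(feat_dim=16)
for p in renderer.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam([volume], lr=1e-2)
points = torch.rand(1024, 3) * 2 - 1                 # query points in [-1, 1]^3
view_dirs = F.normalize(torch.randn(1024, 3), dim=-1)
target_rgb = torch.rand(1024, 3)                     # placeholder supervision

for _ in range(10):
    out = renderer(sample_volume(volume, points), view_dirs)
    loss = F.mse_loss(torch.sigmoid(out[:, :3]), target_rgb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In this sketch, editing or combining scenes would amount to manipulating the feature volumes before feeding them to the same frozen renderer, which is the decoupling the abstract argues for.
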
2021
Stereo Radiance Fields (SRF): Learning View Synthesis from Sparse Views of Novel Scenes
J. Chibane, A. Bansal, V. Lazova and G. Pons-Moll
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021
2019
360-Degree Textures of People in Clothing from a Single Image
V. Lazova, E. Insafutdinov and G. Pons-Moll
International Conference on 3D Vision, 2019
Texture Completion of People in Diverse Clothing
V. Lazova
Master's Thesis, Universität des Saarlandes, 2019