Publications

2017
Adhikarla, V.K., Vinkler, M., Sumin, D., et al. 2017. Towards a Quality Metric for Dense Light Fields. http://arxiv.org/abs/1704.07576.
(arXiv: 1704.07576)
Abstract
Light fields are becoming a popular representation of three-dimensional scenes, and there is interest in their processing, resampling, and compression. As those operations often result in loss of quality, there is a need to quantify it. In this work, we collect a new dataset of dense reference and distorted light fields as well as the corresponding quality scores which are scaled in perceptual units. The scores were acquired in a subjective experiment using an interactive light-field viewing setup. The dataset contains typical artifacts that occur in the light-field processing chain due to light-field reconstruction, multi-view compression, and limitations of automultiscopic displays. We test a number of existing objective quality metrics to determine how well they can predict the quality of light fields. We find that the existing image quality metrics provide good measures of light-field quality, but require dense reference light fields for optimal performance. For more complex tasks of comparing two distorted light fields, their performance drops significantly, which reveals the need for new, light-field-specific metrics.
Export
BibTeX
@online{AdhikarlaArXiv17, TITLE = {Towards a Quality Metric for Dense Light Fields}, AUTHOR = {Adhikarla, Vamsi Kiran and Vinkler, Marek and Sumin, Denis and Mantiuk, Rafa{\l} K. and Myszkowski, Karol and Seidel, Hans-Peter and Didyk, Piotr}, URL = {http://arxiv.org/abs/1704.07576}, EPRINT = {1704.07576}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Light fields are becoming a popular representation of three-dimensional scenes, and there is interest in their processing, resampling, and compression. As those operations often result in loss of quality, there is a need to quantify it. In this work, we collect a new dataset of dense reference and distorted light fields as well as the corresponding quality scores which are scaled in perceptual units. The scores were acquired in a subjective experiment using an interactive light-field viewing setup. The dataset contains typical artifacts that occur in the light-field processing chain due to light-field reconstruction, multi-view compression, and limitations of automultiscopic displays. We test a number of existing objective quality metrics to determine how well they can predict the quality of light fields. We find that the existing image quality metrics provide good measures of light-field quality, but require dense reference light fields for optimal performance. For more complex tasks of comparing two distorted light fields, their performance drops significantly, which reveals the need for new, light-field-specific metrics.}, }
Endnote
%0 Report %A Adhikarla, Vamsi Kiran %A Vinkler, Marek %A Sumin, Denis %A Mantiuk, Rafał K. %A Myszkowski, Karol %A Seidel, Hans-Peter %A Didyk, Piotr %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Towards a Quality Metric for Dense Light Fields : %U http://hdl.handle.net/11858/00-001M-0000-002D-2C2C-1 %U http://arxiv.org/abs/1704.07576 %D 2017 %X Light fields are becoming a popular representation of three-dimensional scenes, and there is interest in their processing, resampling, and compression. As those operations often result in loss of quality, there is a need to quantify it. In this work, we collect a new dataset of dense reference and distorted light fields as well as the corresponding quality scores which are scaled in perceptual units. The scores were acquired in a subjective experiment using an interactive light-field viewing setup. The dataset contains typical artifacts that occur in the light-field processing chain due to light-field reconstruction, multi-view compression, and limitations of automultiscopic displays. We test a number of existing objective quality metrics to determine how well they can predict the quality of light fields. We find that the existing image quality metrics provide good measures of light-field quality, but require dense reference light fields for optimal performance. For more complex tasks of comparing two distorted light fields, their performance drops significantly, which reveals the need for new, light-field-specific metrics. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
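The exported BibTeX records on this page can be used directly from a LaTeX document. A minimal sketch, assuming the records are saved to a file named pubs.bib (the file name and the natbib/plainnat choices are assumptions, not part of the export):

```latex
% Minimal citation example for the exported records.
% Assumes the BibTeX entries (e.g. AdhikarlaArXiv17) were saved as pubs.bib.
\documentclass{article}
\usepackage[numbers]{natbib} % any bibliography package works; natbib is an assumption
\begin{document}
Dense light-field quality is studied in~\cite{AdhikarlaArXiv17}.
\bibliographystyle{plainnat}
\bibliography{pubs}
\end{document}
```

Compiling requires the usual latex/bibtex/latex/latex cycle (or a single latexmk run) so that the citation keys resolve.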
Akyüz, A.O., Tursun, O.T., Hasić-Telalović, J., and Karađuzović-Hadžiabdić, K. 2017. Ghosting in HDR Video. In: High Dynamic Range Video. Elsevier, Amsterdam.
Export
BibTeX
@incollection{hdrvideo2017, TITLE = {Ghosting in {HDR} Video}, AUTHOR = {Aky{\"u}z, Ahmet O{\u g}uz and Tursun, Okan Tarhan and Hasi{\'c}-Telalovi{\'c}, Jasminka and Kara{\dj}uzovi{\'c}-Had{\v z}iabdi{\'c}, Kanita}, LANGUAGE = {eng}, ISBN = {978-0-12-809477-8}, DOI = {10.1016/B978-0-12-809477-8.00001-7}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {High Dynamic Range Video}, EDITOR = {Chalmers, Alan and Campisi, Patrizio and Shiley, Peter and Olaizola, Igor G.}, PAGES = {3--44}, }
Endnote
%0 Book Section %A Akyüz, Ahmet Oğuz %A Tursun, Okan Tarhan %A Hasić-Telalović, Jasminka %A Karađuzović-Hadžiabdić, Kanita %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Ghosting in HDR Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4BAD-A %R 10.1016/B978-0-12-809477-8.00001-7 %D 2017 %B High Dynamic Range Video %E Chalmers, Alan; Campisi, Patrizio; Shiley, Peter; Olaizola, Igor G. %P 3 - 44 %I Elsevier %C Amsterdam %@ 978-0-12-809477-8
Arabadzhiyska, E., Tursun, O.T., Myszkowski, K., Seidel, H.-P., and Didyk, P. Saccade Landing Position Prediction for Gaze-Contingent Rendering. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2017) 36, 4.
(Accepted/in press)
Export
BibTeX
@article{ArabadzhiyskaSIGGRAPH2017, TITLE = {Saccade Landing Position Prediction for Gaze-Contingent Rendering}, AUTHOR = {Arabadzhiyska, Elena and Tursun, Okan Tarhan and Myszkowski, Karol and Seidel, Hans-Peter and Didyk, Piotr}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2017}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {36}, NUMBER = {4}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2017}, }
Endnote
%0 Journal Article %A Arabadzhiyska, Elena %A Tursun, Okan Tarhan %A Myszkowski, Karol %A Seidel, Hans-Peter %A Didyk, Piotr %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Saccade Landing Position Prediction for Gaze-Contingent Rendering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-7D82-9 %D 2017 %J ACM Transactions on Graphics %V 36 %N 4 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2017 %O ACM SIGGRAPH 2017 Los Angeles, California, 30 July - 3 August
Dai, A., Nießner, M., Zollhöfer, M., Izadi, S., and Theobalt, C. BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration. ACM Transactions on Graphics.
(Accepted/in press)
Export
BibTeX
@article{dai2016bundleTOG, TITLE = {{BundleFusion}: {R}eal-time Globally Consistent {3D} Reconstruction using On-the-fly Surface Re-integration}, AUTHOR = {Dai, Angela and Nie{\ss}ner, Matthias and Zollh{\"o}fer, Michael and Izadi, Shahram and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2017}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, JOURNAL = {ACM Transactions on Graphics}, }
Endnote
%0 Journal Article %A Dai, Angela %A Nießner, Matthias %A Zollhöfer, Michael %A Izadi, Shahram %A Theobalt, Christian %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-6796-B %D 2017 %J ACM Transactions on Graphics %I ACM %C New York, NY %@ false
Derler, A., Zayer, R., Seidel, H.-P., and Steinberger, M. 2017. Dynamic Scheduling for Efficient Hierarchical Sparse Matrix Operations on the GPU. ICS 2017, International Conference on Supercomputing, ACM.
Export
BibTeX
@inproceedings{DerlerICS2017, TITLE = {Dynamic Scheduling for Efficient Hierarchical Sparse Matrix Operations on the {GPU}}, AUTHOR = {Derler, Andreas and Zayer, Rhaleb and Seidel, Hans-Peter and Steinberger, Markus}, LANGUAGE = {eng}, ISBN = {978-1-4503-5020-4}, DOI = {10.1145/3079079.3079085}, PUBLISHER = {ACM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {ICS 2017, International Conference on Supercomputing}, EID = {7}, ADDRESS = {Chicago, IL, USA}, }
Endnote
%0 Conference Proceedings %A Derler, Andreas %A Zayer, Rhaleb %A Seidel, Hans-Peter %A Steinberger, Markus %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Dynamic Scheduling for Efficient Hierarchical Sparse Matrix Operations on the GPU : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-7D73-D %R 10.1145/3079079.3079085 %D 2017 %B International Conference on Supercomputing %Z date of event: 2017-06-13 - 2017-06-16 %C Chicago, IL, USA %B ICS 2017 %Z sequence number: 7 %I ACM %@ 978-1-4503-5020-4
Dunn, D., Tippets, C., Torell, K., et al. 2017. Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors. IEEE Transactions on Visualization and Computer Graphics (Proc. VR 2017) 23, 4.
Export
BibTeX
@article{DunnVR2017, TITLE = {Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors}, AUTHOR = {Dunn, David and Tippets, Cary and Torell, Kent and Kellnhofer, Petr and Ak{\c s}it, Kaan and Didyk, Piotr and Myszkowski, Karol and Luebke, David and Fuchs, Henry}, LANGUAGE = {eng}, ISSN = {1077-2626}, DOI = {10.1109/TVCG.2017.2657058}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics (Proc. VR)}, VOLUME = {23}, NUMBER = {4}, PAGES = {1322--1331}, BOOKTITLE = {Selected Proceedings IEEE Virtual Reality 2017 (VR 2017)}, }
Endnote
%0 Journal Article %A Dunn, David %A Tippets, Cary %A Torell, Kent %A Kellnhofer, Petr %A Akşit, Kaan %A Didyk, Piotr %A Myszkowski, Karol %A Luebke, David %A Fuchs, Henry %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-3095-4 %R 10.1109/TVCG.2017.2657058 %7 2017 %D 2017 %J IEEE Transactions on Visualization and Computer Graphics %V 23 %N 4 %& 1322 %P 1322 - 1331 %I IEEE Computer Society %C New York, NY %@ false %B Selected Proceedings IEEE Virtual Reality 2017 %O VR 2017 Los Angeles, California on March 18-22, 2017 %U http://telepresence.web.unc.edu/research/dynamic-focus-augmented-reality-display/
Elhayek, A., de Aguiar, E., Jain, A., et al. 2017. MARCOnI-ConvNet-Based MARker-Less Motion Capture in Outdoor and Indoor Scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 3.
Export
BibTeX
@article{elhayek2016marconi, TITLE = {{MARCOnI}-{ConvNet}-Based {MARker}-Less Motion Capture in Outdoor and Indoor Scenes}, AUTHOR = {Elhayek, Ahmed and de Aguiar, Edilson and Jain, Arjun and Thompson, J. and Pishchulin, Leonid and Andriluka, Mykhaylo and Bregler, C. and Schiele, Bernt and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0162-8828}, DOI = {10.1109/TPAMI.2016.2557779}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {New York}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {IEEE Transactions on Pattern Analysis and Machine Intelligence}, VOLUME = {39}, NUMBER = {3}, PAGES = {501--514}, }
Endnote
%0 Journal Article %A Elhayek, Ahmed %A de Aguiar, Edilson %A Jain, Arjun %A Thompson, J. %A Pishchulin, Leonid %A Andriluka, Mykhaylo %A Bregler, C. %A Schiele, Bernt %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T MARCOnI-ConvNet-Based MARker-Less Motion Capture in Outdoor and Indoor Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-6510-5 %R 10.1109/TPAMI.2016.2557779 %7 2016 %D 2017 %J IEEE Transactions on Pattern Analysis and Machine Intelligence %O IEEE Trans. Pattern Anal. Mach. Intell. PAMI %V 39 %N 3 %& 501 %P 501 - 514 %I IEEE Computer Society %C New York %@ false
Fox, G., Meka, A., Zollhöfer, M., Richardt, C., and Theobalt, C. 2017. Live User-guided Intrinsic Video For Static Scenes. Max-Planck-Institut für Informatik, Saarbrücken.
Abstract
We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.
Export
BibTeX
@techreport{Report2017-4-001, TITLE = {Live User-guided Intrinsic Video For Static Scenes}, AUTHOR = {Fox, Gereon and Meka, Abhimitra and Zollh{\"o}fer, Michael and Richardt, Christian and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2017-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Fox, Gereon %A Meka, Abhimitra %A Zollhöfer, Michael %A Richardt, Christian %A Theobalt, Christian %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Live User-guided Intrinsic Video For Static Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5DA7-3 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2017 %P 12 p. %X We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance. %B Research Report %@ false
Haubenwallner, K., Seidel, H.-P., and Steinberger, M. 2017. ShapeGenetics: Using Genetic Algorithms for Procedural Modeling. Computer Graphics Forum (Proc. EUROGRAPHICS 2017) 36, 2.
Export
BibTeX
@article{haubenwallner2017shapegenetics, TITLE = {{ShapeGenetics}: {U}sing Genetic Algorithms for Procedural Modeling}, AUTHOR = {Haubenwallner, Karl and Seidel, Hans-Peter and Steinberger, Markus}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.13120}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {36}, NUMBER = {2}, PAGES = {213--223}, BOOKTITLE = {The European Association for Computer Graphics 38th Annual Conference (EUROGRAPHICS 2017)}, }
Endnote
%0 Journal Article %A Haubenwallner, Karl %A Seidel, Hans-Peter %A Steinberger, Markus %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T ShapeGenetics: Using Genetic Algorithms for Procedural Modeling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5C69-8 %R 10.1111/cgf.13120 %7 2017 %D 2017 %J Computer Graphics Forum %V 36 %N 2 %& 213 %P 213 - 223 %I Wiley-Blackwell %C Oxford %@ false %B The European Association for Computer Graphics 38th Annual Conference %O EUROGRAPHICS 2017 Lyon, France, 24-28 April 2017 EG 2017
Jiang, C., Tang, C., Seidel, H.-P., and Wonka, P. Design and Volume Optimization of Space Structures. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2017) 36, 4.
(Accepted/in press)
Export
BibTeX
@article{JiangSIGGRAPH2017, TITLE = {Design and Volume Optimization of Space Structures}, AUTHOR = {Jiang, Caigui and Tang, Chengcheng and Seidel, Hans-Peter and Wonka, Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2017}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {36}, NUMBER = {4}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2017}, }
Endnote
%0 Journal Article %A Jiang, Caigui %A Tang, Chengcheng %A Seidel, Hans-Peter %A Wonka, Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Design and Volume Optimization of Space Structures : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-7D8E-2 %D 2017 %J ACM Transactions on Graphics %V 36 %N 4 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2017 %O ACM SIGGRAPH 2017 Los Angeles, California, 30 July - 3 August
Kalojanov, J. 2017. R-symmetry for Triangle Meshes: Detection and Applications. Doctoral dissertation, Universität des Saarlandes, Saarbrücken.
Abstract
In this thesis, we investigate a certain type of local similarities between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have completely different global structure. This allows for using r-microtiling for inverse modeling of shape variations and we develop a method for shape decomposition into rigid, 3D manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: We consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.
Export
BibTeX
@phdthesis{Kalojanovphd2017, TITLE = {R-symmetry for Triangle Meshes: Detection and Applications}, AUTHOR = {Kalojanov, Javor}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {In this thesis, we investigate a certain type of local similarities between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have completely different global structure. This allows for using r-microtiling for inverse modeling of shape variations and we develop a method for shape decomposition into rigid, 3D manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: We consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.}, }
Endnote
%0 Thesis %A Kalojanov, Javor %Y Slusallek, Philipp %A referee: Wand, Michael %A referee: Mitra, Niloy %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T R-symmetry for Triangle Meshes: Detection and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-96A3-B %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 94 p. %V phd %9 phd %X In this thesis, we investigate a certain type of local similarities between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have completely different global structure. This allows for using r-microtiling for inverse modeling of shape variations and we develop a method for shape decomposition into rigid, 3D manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: We consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. 
Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6787/
Kol, T.R., Klehm, O., Seidel, H.-P., and Eisemann, E. 2017. Expressive Single Scattering for Light Shaft Stylization. IEEE Transactions on Visualization and Computer Graphics 23, 7.
Export
BibTeX
@article{kol2016expressive, TITLE = {Expressive Single Scattering for Light Shaft Stylization}, AUTHOR = {Kol, Timothy R. and Klehm, Oliver and Seidel, Hans-Peter and Eisemann, Elmar}, LANGUAGE = {eng}, ISSN = {1077-2626}, DOI = {10.1109/TVCG.2016.2554114}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics}, VOLUME = {23}, NUMBER = {7}, PAGES = {1753--1766}, }
Endnote
%0 Journal Article %A Kol, Timothy R. %A Klehm, Oliver %A Seidel, Hans-Peter %A Eisemann, Elmar %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Expressive Single Scattering for Light Shaft Stylization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-64E7-2 %R 10.1109/TVCG.2016.2554114 %7 2016-04-14 %D 2017 %J IEEE Transactions on Visualization and Computer Graphics %V 23 %N 7 %& 1753 %P 1753 - 1766 %I IEEE Computer Society %C New York, NY %@ false
Masia, B., Serrano, A., and Gutierrez, D. 2017. Dynamic Range Expansion Based on Image Statistics. Multimedia Tools and Applications 76, 1.
Export
BibTeX
@article{RTM_MMTA2015, TITLE = {Dynamic Range Expansion Based on Image Statistics}, AUTHOR = {Masia, Belen and Serrano, Ana and Gutierrez, Diego}, LANGUAGE = {eng}, ISSN = {1380-7501}, DOI = {10.1007/s11042-015-3036-0}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Multimedia Tools and Applications}, VOLUME = {76}, NUMBER = {1}, PAGES = {631--648}, }
Endnote
%0 Journal Article %A Masia, Belen %A Serrano, Ana %A Gutierrez, Diego %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Dynamic Range Expansion Based on Image Statistics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-78ED-4 %R 10.1007/s11042-015-3036-0 %7 2015-11-17 %D 2017 %J Multimedia Tools and Applications %V 76 %N 1 %& 631 %P 631 - 648 %I Springer %C New York, NY %@ false
Mehta, D., Sridhar, S., Sotnychenko, O., et al. VNect: Real-Time 3D Human-Pose Estimation With a Single RGB Camera. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2017) 36, 4.
(Accepted/in press)
Export
BibTeX
@article{MehtaSIGGRAPH2017, TITLE = {{VNect}: {R}eal-Time {3D} Human-Pose Estimation With a Single {RGB} Camera}, AUTHOR = {Mehta, Dushyant and Sridhar, Srinath and Sotnychenko, Oleksandr and Rhodin, Helge and Shafiei, Mohammad and Seidel, Hans-Peter and Xu, Weipeng and Casas, Dan and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2017}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {36}, NUMBER = {4}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2017}, }
Endnote
%0 Journal Article %A Mehta, Dushyant %A Sridhar, Srinath %A Sotnychenko, Oleksandr %A Rhodin, Helge %A Shafiei, Mohammad %A Seidel, Hans-Peter %A Xu, Weipeng %A Casas, Dan %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T VNect: Real-Time 3D Human-Pose Estimation With a Single RGB Camera : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-7D95-0 %D 2017 %J ACM Transactions on Graphics %V 36 %N 4 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2017 %O ACM SIGGRAPH 2017 Los Angeles, California, 30 July - 3 August
Mehta, D., Sridhar, S., Sotnychenko, O., et al. 2017. VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera. http://arxiv.org/abs/1705.01583.
(arXiv: 1705.01583)
Abstract
We present the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control---thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches, such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e. it works for outdoor scenes, community videos, and low quality commodity RGB cameras.
Export
BibTeX
@online{MehtaArXiv2017, TITLE = {{VNect}: Real-time {3D} Human Pose Estimation with a Single {RGB} Camera}, AUTHOR = {Mehta, Dushyant and Sridhar, Srinath and Sotnychenko, Oleksandr and Rhodin, Helge and Shafiei, Mohammad and Seidel, Hans-Peter and Xu, Weipeng and Casas, Dan and Theobalt, Christian}, URL = {http://arxiv.org/abs/1705.01583}, DOI = {10.1145/3072959.3073596}, EPRINT = {1705.01583}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We present the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control---thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches, such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e. it works for outdoor scenes, community videos, and low quality commodity RGB cameras.}, }
Endnote
%0 Report %A Mehta, Dushyant %A Sridhar, Srinath %A Sotnychenko, Oleksandr %A Rhodin, Helge %A Shafiei, Mohammad %A Seidel, Hans-Peter %A Xu, Weipeng %A Casas, Dan %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera : %U http://hdl.handle.net/11858/00-001M-0000-002D-7D78-3 %R 10.1145/3072959.3073596 %U http://arxiv.org/abs/1705.01583 %D 2017 %X We present the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control---thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches, such as the Kinect. 
However, we show that our approach is more broadly applicable than RGB-D solutions, i.e. it works for outdoor scenes, community videos, and low quality commodity RGB cameras. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Graphics, cs.GR
Molnos, S., Mamdouh, T., Petri, S., Nocke, T., Weinkauf, T., and Coumou, D. 2017. A Network-based Detection Scheme for the Jet Stream Core. Earth System Dynamics 8, 1.
Export
BibTeX
@article{Molnos2017, TITLE = {A Network-based Detection Scheme for the Jet Stream Core}, AUTHOR = {Molnos, Sonja and Mamdouh, Tarek and Petri, Stefan and Nocke, Thomas and Weinkauf, Tino and Coumou, Dim}, LANGUAGE = {eng}, ISSN = {2190-4979}, DOI = {10.5194/esd-8-75-2017}, PUBLISHER = {Copernicus GmbH}, ADDRESS = {New York}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Earth System Dynamics}, VOLUME = {8}, NUMBER = {1}, PAGES = {75--89}, }
Endnote
%0 Journal Article %A Molnos, Sonja %A Mamdouh, Tarek %A Petri, Stefan %A Nocke, Thomas %A Weinkauf, Tino %A Coumou, Dim %+ External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T A Network-based Detection Scheme for the Jet Stream Core : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-CBEC-2 %R 10.5194/esd-8-75-2017 %7 2017 %D 2017 %J Earth System Dynamics %O Earth Syst. Dyn. %V 8 %N 1 %& 75 %P 75 - 89 %I Copernicus GmbH %C New York %@ false
Nalbach, O., Seidel, H.-P., and Ritschel, T. 2017. Practical Capture and Reproduction of Phosphorescent Appearance. Computer Graphics Forum (Proc. EUROGRAPHICS 2017) 36, 2.
Export
BibTeX
@article{Nalbach2017, TITLE = {Practical Capture and Reproduction of Phosphorescent Appearance}, AUTHOR = {Nalbach, Oliver and Seidel, Hans-Peter and Ritschel, Tobias}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.13136}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {36}, NUMBER = {2}, PAGES = {409--420}, BOOKTITLE = {The European Association for Computer Graphics 38th Annual Conference (EUROGRAPHICS 2017)}, }
Endnote
%0 Journal Article %A Nalbach, Oliver %A Seidel, Hans-Peter %A Ritschel, Tobias %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Practical Capture and Reproduction of Phosphorescent Appearance : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4A53-9 %R 10.1111/cgf.13136 %7 2017 %D 2017 %J Computer Graphics Forum %V 36 %N 2 %& 409 %P 409 - 420 %I Wiley-Blackwell %C Oxford %@ false %B The European Association for Computer Graphics 38th Annual Conference %O EUROGRAPHICS 2017 Lyon, France, 24-28 April 2017 EG 2017
Pishchulin, L., Wuhrer, S., Helten, T., Theobalt, C., and Schiele, B. 2017. Building Statistical Shape Spaces for 3D Human Modeling. Pattern Recognition 67.
Export
BibTeX
@article{Pishchulin2017, TITLE = {Building statistical shape spaces for {3D} human modeling}, AUTHOR = {Pishchulin, Leonid and Wuhrer, Stefanie and Helten, Thomas and Theobalt, Christian and Schiele, Bernt}, LANGUAGE = {eng}, ISSN = {0031-3203}, DOI = {10.1016/j.patcog.2017.02.018}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Pattern Recognition}, VOLUME = {67}, PAGES = {276--286}, }
Endnote
%0 Journal Article %A Pishchulin, Leonid %A Wuhrer, Stefanie %A Helten, Thomas %A Theobalt, Christian %A Schiele, Bernt %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Building Statistical Shape Spaces for 3D Human Modeling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-3E3D-E %R 10.1016/j.patcog.2017.02.018 %7 2017-02-20 %D 2017 %J Pattern Recognition %O Pattern Recognit. %V 67 %& 276 %P 276 - 286 %I Elsevier %C Amsterdam %@ false
Robertini, N., Casas, D., de Aguiar, E., and Theobalt, C. 2017. Multi-view Performance Capture of Surface Details. International Journal of Computer Vision First Online.
Export
BibTeX
@article{Robertini2017, TITLE = {Multi-view Performance Capture of Surface Details}, AUTHOR = {Robertini, Nadia and Casas, Dan and de Aguiar, Edilson and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0920-5691}, DOI = {10.1007/s11263-016-0979-1}, PUBLISHER = {Kluwer Academic Publishers}, ADDRESS = {Hingham, Mass.}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, JOURNAL = {International Journal of Computer Vision}, VOLUME = {First Online}, }
Endnote
%0 Journal Article %A Robertini, Nadia %A Casas, Dan %A de Aguiar, Edilson %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Multi-view Performance Capture of Surface Details : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4A89-2 %R 10.1007/s11263-016-0979-1 %7 2017-01-21 %D 2017 %8 21.01.2017 %J International Journal of Computer Vision %O Int. J. Comput. Vis. %V First Online %I Kluwer Academic Publishers %C Hingham, Mass. %@ false
Saikia, H., Seidel, H.-P., and Weinkauf, T. 2017. Fast Similarity Search in Scalar Fields using Merging Histograms. In: Topological Methods in Data Analysis and Visualization IV. Springer, Cham.
Export
BibTeX
@incollection{Saikia_Seidel_Weinkauf2017, TITLE = {Fast Similarity Search in Scalar Fields using Merging Histograms}, AUTHOR = {Saikia, Himangshu and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-3-319-44682-0}, DOI = {10.1007/978-3-319-44684-4_7}, PUBLISHER = {Springer}, ADDRESS = {Cham}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Topological Methods in Data Analysis and Visualization IV}, EDITOR = {Carr, Hamish and Garth, Christoph and Weinkauf, Tino}, PAGES = {121--134}, SERIES = {Mathematics and Visualization}, }
Endnote
%0 Book Section %A Saikia, Himangshu %A Seidel, Hans-Peter %A Weinkauf, Tino %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Fast Similarity Search in Scalar Fields using Merging Histograms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-772A-0 %R 10.1007/978-3-319-44684-4_7 %D 2017 %B Topological Methods in Data Analysis and Visualization IV %E Carr, Hamish; Garth, Christoph; Weinkauf, Tino %P 121 - 134 %I Springer %C Cham %@ 978-3-319-44682-0 %S Mathematics and Visualization
Sridhar, S., Markussen, A., Oulasvirta, A., Theobalt, C., and Boring, S. WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor. CHI 2017, 35th Annual ACM Conference on Human Factors in Computing Systems, ACM.
(Accepted/in press)
Export
BibTeX
@inproceedings{WatchSense_CHI2017, TITLE = {{WatchSense}: {O}n- and Above-Skin Input Sensing through a Wearable Depth Sensor}, AUTHOR = {Sridhar, Srinath and Markussen, Anders and Oulasvirta, Antti and Theobalt, Christian and Boring, Sebastian}, LANGUAGE = {eng}, PUBLISHER = {ACM}, YEAR = {2017}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {CHI 2017, 35th Annual ACM Conference on Human Factors in Computing Systems}, ADDRESS = {Denver, CO, USA}, }
Endnote
%0 Conference Proceedings %A Sridhar, Srinath %A Markussen, Anders %A Oulasvirta, Antti %A Theobalt, Christian %A Boring, Sebastian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-6517-8 %D 2016 %8 14.12.2016 %B 35th Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2017-05-06 - 2017-05-11 %C Denver, CO, USA %B CHI 2017 %I ACM
Sridhar, S., Markussen, A., Oulasvirta, A., Theobalt, C., and Boring, S. 2017. WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor. Max-Planck-Institut für Informatik, Saarbrücken.
Abstract
This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.
Export
BibTeX
@techreport{sridharwatch17, TITLE = {{WatchSense}: On- and Above-Skin Input Sensing through a Wearable Depth Sensor}, AUTHOR = {Sridhar, Srinath and Markussen, Anders and Oulasvirta, Antti and Theobalt, Christian and Boring, Sebastian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2016-4-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Sridhar, Srinath %A Markussen, Anders %A Oulasvirta, Antti %A Theobalt, Christian %A Boring, Sebastian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-402E-D %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2017 %P 17 p. %X This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications. %B Research Report %@ false
Steinberger, M., Zayer, R., and Seidel, H.-P. 2017. Globally Homogeneous, Locally Adaptive Sparse Matrix-Vector Multiplication on the GPU. ICS 2017, International Conference on Supercomputing, ACM.
Export
BibTeX
@inproceedings{SteinbergerICS2017, TITLE = {Globally Homogeneous, Locally Adaptive Sparse Matrix-Vector Multiplication on the {GPU}}, AUTHOR = {Steinberger, Markus and Zayer, Rhaleb and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-4503-5020-4}, DOI = {10.1145/3079079.3079086}, PUBLISHER = {ACM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {ICS 2017, International Conference on Supercomputing}, EID = {13}, ADDRESS = {Chicago, IL, USA}, }
Endnote
%0 Conference Proceedings %A Steinberger, Markus %A Zayer, Rhaleb %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Globally Homogeneous, Locally Adaptive Sparse Matrix-Vector Multiplication on the GPU : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-7D71-2 %R 10.1145/3079079.3079086 %D 2017 %B International Conference on Supercomputing %Z date of event: 2017-06-13 - 2017-06-16 %C Chicago, IL, USA %B ICS 2017 %Z sequence number: 13 %I ACM %@ 978-1-4503-5020-4
Weier, M., Stengel, M., Roth, T., et al. Perception-driven Accelerated Rendering. Computer Graphics Forum (Proc. EUROGRAPHICS 2017) 36, 2.
(Accepted/in press)
Export
BibTeX
@article{WeierEG2017STAR, TITLE = {Perception-driven Accelerated Rendering}, AUTHOR = {Weier, Martin and Stengel, Michael and Roth, Thorsten and Didyk, Piotr and Eisemann, Elmar and Eisemann, Martin and Grogorick, Steve and Hinkenjann, Andr{\'e} and Kruijff, Elmar and Magnor, Marcus A. and Myszkowski, Karol and Slusallek, Philipp}, LANGUAGE = {eng}, ISSN = {0167-7055}, PUBLISHER = {Blackwell-Wiley}, ADDRESS = {Oxford}, YEAR = {2017}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {36}, NUMBER = {2}, BOOKTITLE = {EUROGRAPHICS 2017 -- State of the Art Reports}, }
Endnote
%0 Journal Article %A Weier, Martin %A Stengel, Michael %A Roth, Thorsten %A Didyk, Piotr %A Eisemann, Elmar %A Eisemann, Martin %A Grogorick, Steve %A Hinkenjann, André %A Kruijff, Elmar %A Magnor, Marcus A. %A Myszkowski, Karol %A Slusallek, Philipp %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Perception-driven Accelerated Rendering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-3496-8 %D 2017 %J Computer Graphics Forum %V 36 %N 2 %I Blackwell-Wiley %C Oxford %@ false %B EUROGRAPHICS 2017 - State of the Art Reports %O EUROGRAPHICS 2017 EUROGRAPHICS 2017 - STAR EG 2017 Lyon, France, 24-28 April 2017
Wu, X. 2017. Structure-aware Content Creation. urn:nbn:de:bsz:291-scidok-67750.
Abstract
Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging connections between the real and virtual worlds, which prompts a huge demand for massive amounts of three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem and a long-standing challenge in computer graphics and related fields. In this thesis, we propose several techniques for easing the content creation process, which have the common theme of being structure-aware, i.e., maintaining global relations among the parts of a shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because their concise yet highly abstract principles are universally applicable to most regular patterns. We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry-preserving deformations and developed a method to explore this space in real time, which significantly simplifies the generation of symmetry-preserving shape variants. Second, we empirically studied three-dimensional offset statistics and developed a fully automatic retargeting application based on the verified sparsity. Finally, we made a step forward in solving the approximate three-dimensional partial symmetry detection problem using a novel co-occurrence analysis method, which could serve as the foundation for high-level applications.
Export
BibTeX
@phdthesis{wuphd2017, TITLE = {Structure-aware Content Creation}, AUTHOR = {Wu, Xiaokun}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67750}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging connections between the real and virtual world, which prompt the huge demand for massive three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem, and long standing challenge in compute graphics and related fields. In this thesis, we propose several techniques for lightening up the content creation process, which have the common theme of being structure-aware, \ie maintaining global relations among the parts of shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because of their concise yet highly abstract principles are universally applicable to most regular patterns. We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry preserving deformations, and developed a method to explore this space in real-time, which significantly simplified the generation of symmetry preserving shape variants. Second, we empirically studied three-dimensional offset statistics, and developed a fully automatic retargeting application, which is based on verified sparsity. Finally, we made step forward in solving the approximate three-dimensional partial symmetry detection problem, using a novel co-occurrence analysis method, which could serve as the foundation to high-level applications.}, }
Endnote
%0 Thesis %A Wu, Xiaokun %Y Seidel, Hans-Peter %A referee: Wand, Michael %A referee: Hildebrandt, Klaus %A referee: Klein, Reinhard %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Structure-aware Content Creation : Detection, Retargeting and Deformation %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-8072-6 %U urn:nbn:de:bsz:291-scidok-67750 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P viii, 61 p. %V phd %9 phd %X Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging connections between the real and virtual world, which prompt the huge demand for massive three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem, and long standing challenge in compute graphics and related fields. In this thesis, we propose several techniques for lightening up the content creation process, which have the common theme of being structure-aware, \ie maintaining global relations among the parts of shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because of their concise yet highly abstract principles are universally applicable to most regular patterns. We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry preserving deformations, and developed a method to explore this space in real-time, which significantly simplified the generation of symmetry preserving shape variants. 
Second, we empirically studied three-dimensional offset statistics, and developed a fully automatic retargeting application, which is based on verified sparsity. Finally, we made step forward in solving the approximate three-dimensional partial symmetry detection problem, using a novel co-occurrence analysis method, which could serve as the foundation to high-level applications. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6775/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Zayer, R., Steinberger, M., and Seidel, H.-P. 2017. A GPU-adapted Structure for Unstructured Grids. Computer Graphics Forum (Proc. EUROGRAPHICS 2017) 36, 2.
Export
BibTeX
@article{Zayer2017, TITLE = {A {GPU}-adapted Structure for Unstructured Grids}, AUTHOR = {Zayer, Rhaleb and Steinberger, Markus and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.13144}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {36}, NUMBER = {2}, PAGES = {495--507}, BOOKTITLE = {The European Association for Computer Graphics 38th Annual Conference (EUROGRAPHICS 2017)}, }
Endnote
%0 Journal Article %A Zayer, Rhaleb %A Steinberger, Markus %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A GPU-adapted Structure for Unstructured Grids : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5A05-7 %R 10.1111/cgf.13144 %7 2017 %D 2017 %J Computer Graphics Forum %V 36 %N 2 %& 495 %P 495 - 507 %I Wiley-Blackwell %C Oxford %@ false %B The European Association for Computer Graphics 38th Annual Conference %O EUROGRAPHICS 2017 Lyon, France, 24-28 April 2017 EG 2017
2016
Alvarez-Cortez, S., Kunkel, T., and Masia, B. 2016. Practical Low-Cost Recovery of Spectral Power Distributions. Computer Graphics Forum 35, 1.
Export
BibTeX
@article{MasiaCGF2016, TITLE = {Practical Low-Cost Recovery of Spectral Power Distributions}, AUTHOR = {Alvarez-Cortez, Sara and Kunkel, Timo and Masia, Belen}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12717}, PUBLISHER = {Wiley}, ADDRESS = {Chichester}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum}, VOLUME = {35}, NUMBER = {1}, PAGES = {166--178}, }
Endnote
%0 Journal Article %A Alvarez-Cortez, Sara %A Kunkel, Timo %A Masia, Belen %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Practical Low-Cost Recovery of Spectral Power Distributions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-1A2F-4 %R 10.1111/cgf.12717 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 1 %& 166 %P 166 - 178 %I Wiley %C Chichester %@ false
Bachynskyi, M. 2016. Biomechanical Models for Human-Computer Interaction. urn:nbn:de:bsz:291-scidok-66888.
Abstract
Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, are already a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number or even absence of input movement constraints imposed by a device form factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is the source of four issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical-ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput, ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions: we adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; identify the applicability limits of the method for a range of HCI tasks; validate the method's outputs against ground-truth recordings in a typical HCI setting; demonstrate the added value of the method in analyzing the performance and ergonomics of touchscreen devices; and summarize the performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the four above-mentioned issues of post-desktop input.
The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) and at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to address ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to address the non-uniformity of the movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards a solution to the issue of sparse post-desktop knowledge.
Export
BibTeX
@phdthesis{Bachyphd16, TITLE = {Biomechanical Models for Human-Computer Interaction}, AUTHOR = {Bachynskyi, Myroslav}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66888}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, ABSTRACT = {Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, already are a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number or even absence of input movement constraints imposed by a device form-factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of 4 issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. 
We achieve this through the following contributions: - adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; - identify applicability limits of the method for a range of HCI tasks; - validate the method outputs against ground truth recordings in typical HCI setting; - demonstrate the added value of the method in analysis of performance and ergonomics of touchscreen devices; and - summarize performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the 4 above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) as well as at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity.}, }
Endnote
%0 Thesis %A Bachynskyi, Myroslav %Y Steimle, Jürgen %A referee: Schmidt, Albrecht %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Biomechanical Models for Human-Computer Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-0FD4-9 %U urn:nbn:de:bsz:291-scidok-66888 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xiv, 206 p. %V phd %9 phd %X Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, already are a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number or even absence of input movement constraints imposed by a device form-factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of 4 issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. 
We achieve this through the following contributions: - adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; - identify applicability limits of the method for a range of HCI tasks; - validate the method outputs against ground truth recordings in typical HCI setting; - demonstrate the added value of the method in analysis of performance and ergonomics of touchscreen devices; and - summarize performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the 4 above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) as well as at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6688/
Boechat, P., Dokter, M., Kenzel, M., Seidel, H.-P., Schmalstieg, D., and Steinberger, M. 2016. Representing and Scheduling Procedural Generation using Operator Graphs. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
Export
BibTeX
@article{BoaechatSIGGRAPHAsia2016, TITLE = {Representing and Scheduling Procedural Generation using Operator Graphs}, AUTHOR = {Boechat, Pedro and Dokter, Mark and Kenzel, Michael and Seidel, Hans-Peter and Schmalstieg, Dieter and Steinberger, Markus}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2980179.2980227}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, EID = {183}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Boechat, Pedro %A Dokter, Mark %A Kenzel, Michael %A Seidel, Hans-Peter %A Schmalstieg, Dieter %A Steinberger, Markus %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Representing and Scheduling Procedural Generation using Operator Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-98BB-0 %R 10.1145/2980179.2980227 %7 2016 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %Z sequence number: 183 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Brandt, C., von Tycowicz, C., and Hildebrandt, K. 2016. Geometric Flows of Curves in Shape Space for Processing Motion of Deformable Objects. Computer Graphics Forum (Proc. EUROGRAPHICS 2016) 35, 2.
Export
BibTeX
@article{Hildebrandt_EG2016, TITLE = {Geometric Flows of Curves in Shape Space for Processing Motion of Deformable Objects}, AUTHOR = {Brandt, Christopher and von Tycowicz, Christoph and Hildebrandt, Klaus}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12832}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {35}, NUMBER = {2}, PAGES = {295--305}, BOOKTITLE = {The European Association for Computer Graphics 37th Annual Conference (EUROGRAPHICS 2016)}, }
Endnote
%0 Journal Article %A Brandt, Christopher %A von Tycowicz, Christoph %A Hildebrandt, Klaus %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Geometric Flows of Curves in Shape Space for Processing Motion of Deformable Objects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-D22B-8 %R 10.1111/cgf.12832 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 2 %& 295 %P 295 - 305 %I Wiley-Blackwell %C Oxford %@ false %B The European Association for Computer Graphics 37th Annual Conference %O EUROGRAPHICS 2016 Lisbon, Portugal, 9th-13th May 2016 EG 2016
Calagari, K., Elgamal, T., Diab, K., et al. 2016. Depth Personalization and Streaming of Stereoscopic Sports Videos. ACM Transactions on Multimedia Computing, Communications, and Applications 12, 3.
Export
BibTeX
@article{CalagariTMC2016, TITLE = {Depth Personalization and Streaming of Stereoscopic Sports Videos}, AUTHOR = {Calagari, Kiana and Elgamal, Tarek and Diab, Khaled and Templin, Krzysztof and Didyk, Piotr and Matusik, Wojciech and Hefeeda, Mohamed}, LANGUAGE = {eng}, DOI = {10.1145/2890103}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Multimedia Computing, Communications, and Applications}, VOLUME = {12}, NUMBER = {3}, EID = {41}, }
Endnote
%0 Journal Article %A Calagari, Kiana %A Elgamal, Tarek %A Diab, Khaled %A Templin, Krzysztof %A Didyk, Piotr %A Matusik, Wojciech %A Hefeeda, Mohamed %+ External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Depth Personalization and Streaming of Stereoscopic Sports Videos : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-079A-B %R 10.1145/2890103 %7 2016 %D 2016 %J ACM Transactions on Multimedia Computing, Communications, and Applications %O TOMM %V 12 %N 3 %Z sequence number: 41 %I ACM %C New York, NY
Chen, R. and Gotsman, C. 2016a. Complex Transfinite Barycentric Mappings with Similarity Kernels. Computer Graphics Forum (Proc. Eurographics Symposium on Geometric Processing 2016) 35, 5.
Export
BibTeX
@article{ChenSGP2016, TITLE = {Complex Transfinite Barycentric Mappings with Similarity Kernels}, AUTHOR = {Chen, Renjie and Gotsman, Craig}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.1296}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Chichester}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Geometric Processing)}, VOLUME = {35}, NUMBER = {5}, PAGES = {51--53}, BOOKTITLE = {Symposium on Geometry Processing 2016 (Eurographics Symposium on Geometric Processing 2016)}, EDITOR = {Ovsjanikov, Maks and Panozzo, Daniele}, }
Endnote
%0 Journal Article %A Chen, Renjie %A Gotsman, Craig %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Complex Transfinite Barycentric Mappings with Similarity Kernels : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-430B-5 %R 10.1111/cgf.1296 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 5 %& 51 %P 51 - 53 %I Wiley-Blackwell %C Chichester %@ false %B Symposium on Geometry Processing 2016 %O Berlin, Germany ; June 20 - 24, 2016 SGP 2016 Eurographics Symposium on Geometric Processing 2016
Chen, R. and Gotsman, C. 2016b. On Pseudo-harmonic Barycentric Coordinates. Computer Aided Geometric Design 44.
Export
BibTeX
@article{Chen_Gotsman2016, TITLE = {On Pseudo-harmonic Barycentric Coordinates}, AUTHOR = {Chen, Renjie and Gotsman, Craig}, LANGUAGE = {eng}, ISSN = {0167-8396}, DOI = {10.1016/j.cagd.2016.04.005}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Aided Geometric Design}, VOLUME = {44}, PAGES = {15--35}, }
Endnote
%0 Journal Article %A Chen, Renjie %A Gotsman, Craig %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T On Pseudo-harmonic Barycentric Coordinates : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-05AD-6 %R 10.1016/j.cagd.2016.04.005 %7 2016 %D 2016 %J Computer Aided Geometric Design %V 44 %& 15 %P 15 - 35 %I Elsevier %C Amsterdam %@ false
Chen, R. and Gotsman, C. 2016c. Generalized As-Similar-As-Possible Warping with Applications in Digital Photography. Computer Graphics Forum (Proc. EUROGRAPHICS 2016) 35, 2.
Export
BibTeX
@article{ChenEG2016, TITLE = {Generalized As-Similar-As-Possible Warping with Applications in Digital Photography}, AUTHOR = {Chen, Renjie and Gotsman, Craig}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12813}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {35}, NUMBER = {2}, PAGES = {81--92}, BOOKTITLE = {The European Association for Computer Graphics 37th Annual Conference (EUROGRAPHICS 2016)}, }
Endnote
%0 Journal Article %A Chen, Renjie %A Gotsman, Craig %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Generalized As-Similar-As-Possible Warping with Applications in Digital Photography : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-8BBD-4 %R 10.1111/cgf.12813 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 2 %& 81 %P 81 - 92 %I Wiley-Blackwell %C Oxford %@ false %B The European Association for Computer Graphics 37th Annual Conference %O EUROGRAPHICS 2016 Lisbon, Portugal, 9th-13th May 2016 EG 2016
Chien, E., Chen, R., and Weber, O. 2016. Bounded Distortion Harmonic Shape Interpolation. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{ChienSIGGRAPH2016, TITLE = {Bounded Distortion Harmonic Shape Interpolation}, AUTHOR = {Chien, Edward and Chen, Renjie and Weber, Ofir}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925926}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {105}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Chien, Edward %A Chen, Renjie %A Weber, Ofir %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Bounded Distortion Harmonic Shape Interpolation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0793-A %R 10.1145/2897824.2925926 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 105 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Dąbała, Ł., Ziegler, M., Didyk, P., et al. 2016. Efficient Multi-image Correspondences for On-line Light Field Video Processing. Computer Graphics Forum (Proc. Pacific Graphics 2016) 35, 7.
Export
BibTeX
@article{DabalaPG2016, TITLE = {Efficient Multi-image Correspondences for On-line Light Field Video Processing}, AUTHOR = {D{\c a}ba{\l}a, {\L}ukasz and Ziegler, Matthias and Didyk, Piotr and Zilly, Frederik and Keinert, Joachim and Myszkowski, Karol and Rokita, Przemyslaw and Ritschel, Tobias}, LANGUAGE = {eng}, ISSN = {1467-8659}, DOI = {10.1111/cgf.13037}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {35}, NUMBER = {7}, PAGES = {401--410}, BOOKTITLE = {The 24th Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2016)}, }
Endnote
%0 Journal Article %A Dąbała, Łukasz %A Ziegler, Matthias %A Didyk, Piotr %A Zilly, Frederik %A Keinert, Joachim %A Myszkowski, Karol %A Rokita, Przemyslaw %A Ritschel, Tobias %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Multi-image Correspondences for On-line Light Field Video Processing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82BA-5 %R 10.1111/cgf.13037 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 7 %& 401 %P 401 - 410 %I Wiley-Blackwell %C Oxford, UK %@ false %B The 24th Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2016 PG 2016
Dai, A., Nießner, M., Zollhöfer, M., Izadi, S., and Theobalt, C. 2016. BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration. http://arxiv.org/abs/1604.01093.
(arXiv: 1604.01093)
Abstract
Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results, but suffer from: (1) needing minutes to perform online correction, preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking, and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle adjusted) poses in real-time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real-time to ensure global consistency; all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par with offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.
Export
BibTeX
@online{DaiarXiv1604.01093, TITLE = {{BundleFusion}: {R}eal-time Globally Consistent {3D} Reconstruction using On-the-fly Surface Re-integration}, AUTHOR = {Dai, Angela and Nie{\ss}ner, Matthias and Zollh{\"o}fer, Michael and Izadi, Shahram and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1604.01093}, EPRINT = {1604.01093}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results, but suffer from: (1) needing minutes to perform online correction preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking, and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle adjusted) poses in real-time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real-time to ensure global consistency; all within a single framework. 
Our approach outperforms state-of-the-art online systems with quality on par to offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.}, }
Endnote
%0 Report %A Dai, Angela %A Nießner, Matthias %A Zollhöfer, Michael %A Izadi, Shahram %A Theobalt, Christian %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A9F-2 %U http://arxiv.org/abs/1604.01093 %D 2016 %X Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results, but suffer from: (1) needing minutes to perform online correction preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking, and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. 
Our approach estimates globally optimized (i.e., bundle adjusted) poses in real-time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real-time to ensure global consistency; all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par to offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results. %K Computer Science, Graphics, cs.GR,Computer Science, Computer Vision and Pattern Recognition, cs.CV
DeVito, Z., Mara, M., Zollhöfer, M., et al. 2016. Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging. http://arxiv.org/abs/1604.06525.
(arXiv: 1604.06525)
Abstract
Many graphics and vision problems are naturally expressed as optimizations with either linear or non-linear least squares objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance in interactive applications. We propose a new language, Opt (available under http://optlang.org), in which a user simply writes energy functions over image- or graph-structured unknowns, and a compiler automatically generates state-of-the-art GPU optimization kernels. The end result is a system in which real-world energy functions in graphics and vision applications are expressible in tens of lines of code. They compile directly into highly-optimized GPU solver implementations with performance competitive with the best published hand-tuned, application-specific GPU solvers, and 1-2 orders of magnitude beyond a general-purpose auto-generated solver.
Export
BibTeX
@online{DeVito1604.06525, TITLE = {Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging}, AUTHOR = {DeVito, Zachary and Mara, Michael and Zollh{\"o}fer, Michael and Bernstein, Gilbert and Ragan-Kelley, Jonathan and Theobalt, Christian and Hanrahan, Pat and Fisher, Matthew and Nie{\ss}ner, Matthias}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1604.06525}, EPRINT = {1604.06525}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Many graphics and vision problems are naturally expressed as optimizations with either linear or non-linear least squares objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance in interactive applications. We propose a new language, Opt (available under http://optlang.org), in which a user simply writes energy functions over image- or graph-structured unknowns, and a compiler automatically generates state-of-the-art GPU optimization kernels. The end result is a system in which real-world energy functions in graphics and vision applications are expressible in tens of lines of code. They compile directly into highly-optimized GPU solver implementations with performance competitive with the best published hand-tuned, application-specific GPU solvers, and 1-2 orders of magnitude beyond a general-purpose auto-generated solver.}, }
Endnote
%0 Report %A DeVito, Zachary %A Mara, Michael %A Zollhöfer, Michael %A Bernstein, Gilbert %A Ragan-Kelley, Jonathan %A Theobalt, Christian %A Hanrahan, Pat %A Fisher, Matthew %A Nießner, Matthias %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9AA6-0 %U http://arxiv.org/abs/1604.06525 %D 2016 %X Many graphics and vision problems are naturally expressed as optimizations with either linear or non-linear least squares objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance in interactive applications. We propose a new language, Opt (available under http://optlang.org), in which a user simply writes energy functions over image- or graph-structured unknowns, and a compiler automatically generates state-of-the-art GPU optimization kernels. The end result is a system in which real-world energy functions in graphics and vision applications are expressible in tens of lines of code. They compile directly into highly-optimized GPU solver implementations with performance competitive with the best published hand-tuned, application-specific GPU solvers, and 1-2 orders of magnitude beyond a general-purpose auto-generated solver. %K Computer Science, Graphics, cs.GR,Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Programming Languages, cs.PL
Efrat, N., Didyk, P., Foshey, M., Matusik, W., and Levin, A. 2016. Cinema 3D: Large Scale Automultiscopic Display. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{EfratSIGGRAPH2016, TITLE = {Cinema {3D}: {L}arge Scale Automultiscopic Display}, AUTHOR = {Efrat, Netalee and Didyk, Piotr and Foshey, Mike and Matusik, Wojciech and Levin, Anat}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925921}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {59}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Efrat, Netalee %A Didyk, Piotr %A Foshey, Mike %A Matusik, Wojciech %A Levin, Anat %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Cinema 3D: Large Scale Automultiscopic Display : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0189-5 %R 10.1145/2897824.2925921 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 59 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Elek, O. 2016. Efficient Methods for Physically-based Rendering of Participating Media. urn:nbn:de:bsz:291-scidok-65357.
Export
BibTeX
@phdthesis{ElekPhD2016, TITLE = {Efficient Methods for Physically-based Rendering of Participating Media}, AUTHOR = {Elek, Oskar}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65357}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Elek, Oskar %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %A referee: Dachsbacher, Karsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Efficient Methods for Physically-based Rendering of Participating Media : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-F94D-E %U urn:nbn:de:bsz:291-scidok-65357 %I Universität des Saarlandes %C Saarbrücken %D 2016 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6535/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Garrido, P., Zollhöfer, M., Wu, C., et al. 2016a. Corrective 3D Reconstruction of Lips from Monocular Video. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
Export
BibTeX
@article{Garrido2016SGA, TITLE = {Corrective {3D} Reconstruction of Lips from Monocular Video}, AUTHOR = {Garrido, Pablo and Zollh{\"o}fer, Michael and Wu, Chenglei and Bradley, Derek and Perez, Patrick and Beeler, Thabo and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, EID = {219}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Garrido, Pablo %A Zollhöfer, Michael %A Wu, Chenglei %A Bradley, Derek %A Perez, Patrick %A Beeler, Thabo %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Corrective 3D Reconstruction of Lips from Monocular Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-23CE-F %7 2016 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %Z sequence number: 219 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Garrido, P., Zollhöfer, M., Casas, D., et al. 2016b. Reconstruction of Personalized 3D Face Rigs from Monocular Video. ACM Transactions on Graphics 35, 3.
Export
BibTeX
@article{GarridoTOG2016, TITLE = {Reconstruction of Personalized 3{D} Face Rigs from Monocular Video}, AUTHOR = {Garrido, Pablo and Zollh{\"o}fer, Michael and Casas, Dan and Valgaerts, Levi and Varanasi, Kiran and P{\'e}rez, Patrick and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2890493}, PUBLISHER = {Association for Computing Machinery}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics}, VOLUME = {35}, NUMBER = {3}, EID = {28}, }
Endnote
%0 Journal Article %A Garrido, Pablo %A Zollhöfer, Michael %A Casas, Dan %A Valgaerts, Levi %A Varanasi, Kiran %A Pérez, Patrick %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Reconstruction of Personalized 3D Face Rigs from Monocular Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-F544-D %R 10.1145/2890493 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 3 %Z sequence number: 28 %I Association for Computing Machinery %C New York, NY %@ false
Garrido, P., Valgaerts, L., Rehmsen, O., Thormählen, T., Perez, P., and Theobalt, C. 2016c. Automatic Face Reenactment. http://arxiv.org/abs/1602.02651.
(arXiv: 1602.02651)
Abstract
We propose an image-based, facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance. Our system is fully automatic and does not require a database of source expressions. Instead, it is able to produce convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, where the user performs arbitrary facial gestures. Our reenactment pipeline is conceived as part image retrieval and part face transfer: The image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user's identity. Our system excels in simplicity as it does not rely on a 3D face model, it is robust under head motion and does not require the source and target performance to be similar. We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet.
Export
BibTeX
@online{GarridoarXiv1602.02651, TITLE = {Automatic Face Reenactment}, AUTHOR = {Garrido, Pablo and Valgaerts, Levi and Rehmsen, Ole and Thorm{\"a}hlen, Thorsten and Perez, Patrick and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.02651}, EPRINT = {1602.02651}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We propose an image-based, facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance. Our system is fully automatic and does not require a database of source expressions. Instead, it is able to produce convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, where the user performs arbitrary facial gestures. Our reenactment pipeline is conceived as part image retrieval and part face transfer: The image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user's identity. Our system excels in simplicity as it does not rely on a 3D face model, it is robust under head motion and does not require the source and target performance to be similar. We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet.}, }
Endnote
%0 Report %A Garrido, Pablo %A Valgaerts, Levi %A Rehmsen, Ole %A Thormählen, Thorsten %A Perez, Patrick %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Automatic Face Reenactment : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A53-8 %U http://arxiv.org/abs/1602.02651 %D 2016 %X We propose an image-based, facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance. Our system is fully automatic and does not require a database of source expressions. Instead, it is able to produce convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, where the user performs arbitrary facial gestures. Our reenactment pipeline is conceived as part image retrieval and part face transfer: The image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user's identity. Our system excels in simplicity as it does not rely on a 3D face model, it is robust under head motion and does not require the source and target performance to be similar. We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Graphics, cs.GR
Georgoulis, S., Rematas, K., Ritschel, T., Fritz, M., Tuytelaars, T., and Van Gool, L. 2016. Natural Illumination from Multiple Materials Using Deep Learning. http://arxiv.org/abs/1611.09325.
(arXiv: 1611.09325)
Abstract
Recovering natural illumination from a single Low-Dynamic Range (LDR) image is a challenging task. To remedy this situation we exploit two properties often found in everyday images. First, images rarely show a single material, but rather multiple ones that all reflect the same illumination. However, the appearance of each material is observed only for some surface orientations, not all. Second, parts of the illumination are often directly observed in the background, without being affected by reflection. Typically, this directly observed part of the illumination is even smaller. We propose a deep Convolutional Neural Network (CNN) that combines prior knowledge about the statistics of illumination and reflectance with an input that makes explicit use of these two observations. Our approach maps multiple partial LDR material observations represented as reflectance maps and a background image to a spherical High-Dynamic Range (HDR) illumination map. For training and testing we propose a new data set comprising of synthetic and real images with multiple materials observed under the same illumination. Qualitative and quantitative evidence shows how both multi-material and using a background are essential to improve illumination estimations.
Export
BibTeX
@online{Fritzarxiv16, TITLE = {Natural Illumination from Multiple Materials Using Deep Learning}, AUTHOR = {Georgoulis, Stamatios and Rematas, Konstantinos and Ritschel, Tobias and Fritz, Mario and Tuytelaars, Tinne and Van Gool, Luc}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1611.09325}, EPRINT = {1611.09325}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Recovering natural illumination from a single Low-Dynamic Range (LDR) image is a challenging task. To remedy this situation we exploit two properties often found in everyday images. First, images rarely show a single material, but rather multiple ones that all reflect the same illumination. However, the appearance of each material is observed only for some surface orientations, not all. Second, parts of the illumination are often directly observed in the background, without being affected by reflection. Typically, this directly observed part of the illumination is even smaller. We propose a deep Convolutional Neural Network (CNN) that combines prior knowledge about the statistics of illumination and reflectance with an input that makes explicit use of these two observations. Our approach maps multiple partial LDR material observations represented as reflectance maps and a background image to a spherical High-Dynamic Range (HDR) illumination map. For training and testing we propose a new data set comprising of synthetic and real images with multiple materials observed under the same illumination. Qualitative and quantitative evidence shows how both multi-material and using a background are essential to improve illumination estimations.}, }
Endnote
%0 Report %A Georgoulis, Stamatios %A Rematas, Konstantinos %A Ritschel, Tobias %A Fritz, Mario %A Tuytelaars, Tinne %A Van Gool, Luc %+ External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Natural Illumination from Multiple Materials Using Deep Learning : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-270F-0 %U http://arxiv.org/abs/1611.09325 %D 2016 %X Recovering natural illumination from a single Low-Dynamic Range (LDR) image is a challenging task. To remedy this situation we exploit two properties often found in everyday images. First, images rarely show a single material, but rather multiple ones that all reflect the same illumination. However, the appearance of each material is observed only for some surface orientations, not all. Second, parts of the illumination are often directly observed in the background, without being affected by reflection. Typically, this directly observed part of the illumination is even smaller. We propose a deep Convolutional Neural Network (CNN) that combines prior knowledge about the statistics of illumination and reflectance with an input that makes explicit use of these two observations. Our approach maps multiple partial LDR material observations represented as reflectance maps and a background image to a spherical High-Dynamic Range (HDR) illumination map. For training and testing we propose a new data set comprising of synthetic and real images with multiple materials observed under the same illumination. Qualitative and quantitative evidence shows how both multi-material and using a background are essential to improve illumination estimations. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Groeger, D., Chong Loo, E., and Steimle, J. 2016. HotFlex: Post-print Customization of 3D Prints Using Embedded State Change. CHI 2016, 34th Annual ACM Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{Groeger_chi2016, TITLE = {{HotFlex}: {P}ost-print Customization of {3D} Prints Using Embedded State Change}, AUTHOR = {Groeger, Daniel and Chong Loo, Elena and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISBN = {978-1-4503-3362-7}, DOI = {10.1145/2858036.2858191}, PUBLISHER = {ACM}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {CHI 2016, 34th Annual ACM Conference on Human Factors in Computing Systems}, PAGES = {420--432}, ADDRESS = {San Jose, CA, USA}, }
Endnote
%0 Conference Proceedings %A Groeger, Daniel %A Chong Loo, Elena %A Steimle, Jürgen %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T HotFlex: Post-print Customization of 3D Prints Using Embedded State Change : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-07BA-3 %R 10.1145/2858036.2858191 %D 2016 %B 34th Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2016-05-07 - 2016-05-12 %C San Jose, CA, USA %B CHI 2016 %P 420 - 432 %I ACM %@ 978-1-4503-3362-7
Gryaditskaya, Y., Masia, B., Didyk, P., Myszkowski, K., and Seidel, H.-P. 2016. Gloss Editing in Light Fields. VMV 2016 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{jgryadit2016, TITLE = {Gloss Editing in Light Fields}, AUTHOR = {Gryaditskaya, Yulia and Masia, Belen and Didyk, Piotr and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-3-03868-025-3}, DOI = {10.2312/vmv.20161351}, PUBLISHER = {Eurographics Association}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {VMV 2016 Vision, Modeling and Visualization}, EDITOR = {Hullin, Matthias and Stamminger, Marc and Weinkauf, Tino}, PAGES = {127--135}, ADDRESS = {Bayreuth, Germany}, }
Endnote
%0 Conference Proceedings %A Gryaditskaya, Yulia %A Masia, Belen %A Didyk, Piotr %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Gloss Editing in Light Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82C5-B %R 10.2312/vmv.20161351 %D 2016 %B 21st International Symposium on Vision, Modeling and Visualization %Z date of event: 2016-10-10 - 2016-10-12 %C Bayreuth, Germany %B VMV 2016 Vision, Modeling and Visualization %E Hullin, Matthias; Stamminger, Marc; Weinkauf, Tino %P 127 - 135 %I Eurographics Association %@ 978-3-03868-025-3
Hanka, A. 2016. Material Appearance Editing in Complex Volume and Surface Renderings. Master's thesis, Universität des Saarlandes, Saarbrücken.
Abstract
When considering global illumination, material editing is a non-linear task and even in scenes with moderate complexity, the global nature of material editing makes final prediction of appearance of other objects in the scene a difficult task. In this thesis, a novel interactive method is proposed for object appearance design. To achieve this, a randomized per-pixel parametrization of scene materials is defined. At rendering time, parametrized materials have different properties for every pixel. This way, encoding of multiple rendered results into one image is obtained. We call this collection of data a hyperimage. Material editing means projecting the hyperimage onto a given parameter vector, which is achieved using non-linear weighted regression. Pixel guides based on geometry (normals, depth and unique object ID), materials and lighting properties of the scene enter the regression problem as pixel weights. In order to ensure that only relevant features are considered, a rendering-based feature selection method is introduced, which uses a precomputed pixelfeature function, encoding per-pixel importance of each parametrized material. The method of hyperimages is independent of the underlying rendering algorithm, while supporting a full global illumination and surface interactions. Our method is not limited to parametrization of materials, and can be extended to other scene properties. As an example, we show parametrization of position of an area light source.
Export
BibTeX
@mastersthesis{HankaMSc2016, TITLE = {Material Appearance Editing in Complex Volume and Surface Renderings}, AUTHOR = {Hanka, Adam}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016-03-31}, ABSTRACT = {When considering global illumination, material editing is a non-linear task and even in scenes with moderate complexity, the global nature of material editing makes final prediction of appearance of other objects in the scene a difficult task. In this thesis, a novel interactive method is proposed for object appearance design. To achieve this, a randomized per-pixel parametrization of scene materials is defined. At rendering time, parametrized materials have different properties for every pixel. This way, encoding of multiple rendered results into one image is obtained. We call this collection of data a hyperimage. Material editing means projecting the hyperimage onto a given parameter vector, which is achieved using non-linear weighted regression. Pixel guides based on geometry (normals, depth and unique object ID), materials and lighting properties of the scene enter the regression problem as pixel weights. In order to ensure that only relevant features are considered, a rendering-based feature selection method is introduced, which uses a precomputed pixelfeature function, encoding per-pixel importance of each parametrized material. The method of hyperimages is independent of the underlying rendering algorithm, while supporting a full global illumination and surface interactions. Our method is not limited to parametrization of materials, and can be extended to other scene properties. As an example, we show parametrization of position of an area light source.}, }
Endnote
%0 Thesis %A Hanka, Adam %Y Ritschel, Tobias %A referee: Slusallek, Philipp %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Material Appearance Editing in Complex Volume and Surface Renderings : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-41E0-8 %I Universität des Saarlandes %C Saarbrücken %D 2016 %8 31.03.2016 %P 51 p. %V master %9 master %X When considering global illumination, material editing is a non-linear task and even in scenes with moderate complexity, the global nature of material editing makes final prediction of appearance of other objects in the scene a difficult task. In this thesis, a novel interactive method is proposed for object appearance design. To achieve this, a randomized per-pixel parametrization of scene materials is defined. At rendering time, parametrized materials have different properties for every pixel. This way, encoding of multiple rendered results into one image is obtained. We call this collection of data a hyperimage. Material editing means projecting the hyperimage onto a given parameter vector, which is achieved using non-linear weighted regression. Pixel guides based on geometry (normals, depth and unique object ID), materials and lighting properties of the scene enter the regression problem as pixel weights. In order to ensure that only relevant features are considered, a rendering-based feature selection method is introduced, which uses a precomputed pixelfeature function, encoding per-pixel importance of each parametrized material. The method of hyperimages is independent of the underlying rendering algorithm, while supporting a full global illumination and surface interactions. Our method is not limited to parametrization of materials, and can be extended to other scene properties. As an example, we show parametrization of position of an area light source.
Hatefi Ardakani, H. 2016. Finite Horizon Analysis of Markov Automata. Doctoral dissertation, Universität des Saarlandes, Saarbrücken. urn:nbn:de:bsz:291-scidok-67438.
Abstract
Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span as special cases, the models of discrete and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they might be equipped with reward and resource structures in order to be used for analysing quantitative aspects of systems, like performance metrics, energy consumption, repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety critical systems. The Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness thus far prevents them from efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis lies in their analysis under some budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study mathematical foundations of Markov automata since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we put the problem of computing the optimal expected resource-bounded reward in our focus. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses in particular the optimal time-bound reward and reachability as well as resource-bounded reachability. 
We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem in an efficient way. We report on an implementation of our approach in a supporting tool and also demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies.
Export
BibTeX
@phdthesis{Hatefiphd17, TITLE = {Finite Horizon Analysis of {M}arkov Automata}, AUTHOR = {Hatefi Ardakani, Hassan}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67438}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, ABSTRACT = {Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span as special cases, the models of discrete and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they might be equipped with reward and resource structures in order to be used for analysing quantitative aspects of systems, like performance metrics, energy consumption, repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety critical systems. The Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness thus far prevents them from efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis lies in their analysis under some budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study mathematical foundations of Markov automata since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we put the problem of computing the optimal expected resource-bounded reward in our focus. 
In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses in particular the optimal time-bound reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem in an efficient way. We report on an implementation of our approach in a supporting tool and also demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies.}, }
Endnote
%0 Thesis %A Hatefi Ardakani, Hassan %Y Hermanns, Holger %A referee: Buchholz, Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Finite Horizon Analysis of Markov Automata : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-9E81-C %U urn:nbn:de:bsz:291-scidok-67438 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P X, 175 p. %V phd %9 phd %X Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span as special cases, the models of discrete and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they might be equipped with reward and resource structures in order to be used for analysing quantitative aspects of systems, like performance metrics, energy consumption, repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety critical systems. The Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness thus far prevents them from efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis lies in their analysis under some budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study mathematical foundations of Markov automata since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. 
Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we put the problem of computing the optimal expected resource-bounded reward in our focus. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses in particular the optimal time-bound reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem in an efficient way. We report on an implementation of our approach in a supporting tool and also demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6743/
Havran, V., Filip, J., and Myszkowski, K. 2016. Perceptually Motivated BRDF Comparison using Single Image. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2016) 35, 4.
Export
BibTeX
@article{havran2016perceptually, TITLE = {Perceptually Motivated {BRDF} Comparison using Single Image}, AUTHOR = {Havran, Vlastimil and Filip, Jiri and Myszkowski, Karol}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12944}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {35}, NUMBER = {4}, PAGES = {1--12}, BOOKTITLE = {Eurographics Symposium on Rendering 2016}, EDITOR = {Eisemann, Elmar and Fiume, Eugene}, }
Endnote
%0 Journal Article %A Havran, Vlastimil %A Filip, Jiri %A Myszkowski, Karol %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptually Motivated BRDF Comparison using Single Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82C0-6 %R 10.1111/cgf.12944 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 4 %& 1 %P 1 - 12 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2016 %O Eurographics Symposium on Rendering 2016 EGSR 2016 Dublin, Ireland, 22-24 June 2016
Innmann, M., Zollhöfer, M., Nießner, M., Theobalt, C., and Stamminger, M. 2016a. VolumeDeform: Real-time Volumetric Non-rigid Reconstruction. http://arxiv.org/abs/1603.08161.
(arXiv: 1603.08161)
Abstract
We present a novel approach for the reconstruction of dynamic geometric shapes using a single hand-held consumer-grade RGB-D sensor at real-time rates. Our method does not require a pre-defined shape template to start with and builds up the scene model from scratch during the scanning process. Geometry and motion are parameterized in a unified manner by a volumetric representation that encodes a distance field of the surface geometry as well as the non-rigid space deformation. Motion tracking is based on a set of extracted sparse color features in combination with a dense depth-based constraint formulation. This enables accurate tracking and drastically reduces drift inherent to standard model-to-depth alignment. We cast finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints. The problem is tackled in real-time at the camera's capture rate using a data-parallel flip-flop optimization strategy. Our results demonstrate robust tracking even for fast motion and scenes that lack geometric features.
Export
BibTeX
@online{InnmannarXiv1603.08161, TITLE = {{VolumeDeform}: Real-time Volumetric Non-rigid Reconstruction}, AUTHOR = {Innmann, Matthias and Zollh{\"o}fer, Michael and Nie{\ss}ner, Matthias and Theobalt, Christian and Stamminger, Marc}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1603.08161}, EPRINT = {1603.08161}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We present a novel approach for the reconstruction of dynamic geometric shapes using a single hand-held consumer-grade RGB-D sensor at real-time rates. Our method does not require a pre-defined shape template to start with and builds up the scene model from scratch during the scanning process. Geometry and motion are parameterized in a unified manner by a volumetric representation that encodes a distance field of the surface geometry as well as the non-rigid space deformation. Motion tracking is based on a set of extracted sparse color features in combination with a dense depth-based constraint formulation. This enables accurate tracking and drastically reduces drift inherent to standard model-to-depth alignment. We cast finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints. The problem is tackled in real-time at the camera's capture rate using a data-parallel flip-flop optimization strategy. Our results demonstrate robust tracking even for fast motion and scenes that lack geometric features.}, }
Endnote
%0 Report %A Innmann, Matthias %A Zollhöfer, Michael %A Nießner, Matthias %A Theobalt, Christian %A Stamminger, Marc %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T VolumeDeform: Real-time Volumetric Non-rigid Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A8E-6 %U http://arxiv.org/abs/1603.08161 %D 2016 %X We present a novel approach for the reconstruction of dynamic geometric shapes using a single hand-held consumer-grade RGB-D sensor at real-time rates. Our method does not require a pre-defined shape template to start with and builds up the scene model from scratch during the scanning process. Geometry and motion are parameterized in a unified manner by a volumetric representation that encodes a distance field of the surface geometry as well as the non-rigid space deformation. Motion tracking is based on a set of extracted sparse color features in combination with a dense depth-based constraint formulation. This enables accurate tracking and drastically reduces drift inherent to standard model-to-depth alignment. We cast finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints. The problem is tackled in real-time at the camera's capture rate using a data-parallel flip-flop optimization strategy. Our results demonstrate robust tracking even for fast motion and scenes that lack geometric features. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Innmann, M., Zollhöfer, M., Nießner, M., Theobalt, C., and Stamminger, M. 2016b. VolumeDeform: Real-Time Volumetric Non-rigid Reconstruction. Computer Vision -- ECCV 2016, Springer.
Export
BibTeX
@inproceedings{InnmannECCV2016, TITLE = {{VolumeDeform}: {R}eal-Time Volumetric Non-rigid Reconstruction}, AUTHOR = {Innmann, Matthias and Zollh{\"o}fer, Michael and Nie{\ss}ner, Matthias and Theobalt, Christian and Stamminger, Marc}, LANGUAGE = {eng}, ISBN = {978-3-319-46483-1}, DOI = {10.1007/978-3-319-46484-8_22}, PUBLISHER = {Springer}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Computer Vision -- ECCV 2016}, EDITOR = {Leibe, Bastian and Matas, Jiri and Sebe, Nicu and Welling, Max}, PAGES = {362--379}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9912}, ADDRESS = {Amsterdam, The Netherlands}, }
Endnote
%0 Conference Proceedings %A Innmann, Matthias %A Zollhöfer, Michael %A Nießner, Matthias %A Theobalt, Christian %A Stamminger, Marc %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T VolumeDeform: Real-Time Volumetric Non-rigid Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A41-0 %R 10.1007/978-3-319-46484-8_22 %D 2016 %B 14th European Conference on Computer Vision %Z date of event: 2016-10-11 - 2016-10-14 %C Amsterdam, The Netherlands %B Computer Vision -- ECCV 2016 %E Leibe, Bastian; Matas, Jiri; Sebe, Nicu; Welling, Max %P 362 - 379 %I Springer %@ 978-3-319-46483-1 %B Lecture Notes in Computer Science %N 9912
Kaaser, D., Mallmann-Trenn, F., and Natale, E. 2016. On the Voting Time of the Deterministic Majority Process. 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016), Schloss Dagstuhl.
Export
BibTeX
@inproceedings{KMN16, TITLE = {On the Voting Time of the Deterministic Majority Process}, AUTHOR = {Kaaser, Dominik and Mallmann-Trenn, Frederik and Natale, Emanuele}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-016-3}, URL = {urn:nbn:de:0030-drops-64675}, DOI = {10.4230/LIPIcs.MFCS.2016.55}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)}, EDITOR = {Sankowski, Piotr and Muscholl, Anca and Niedermeier, Rolf}, PAGES = {1--15}, EID = {55}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {58}, ADDRESS = {Krak{\'o}w, Poland}, }
Endnote
%0 Conference Proceedings %A Kaaser, Dominik %A Mallmann-Trenn, Frederik %A Natale, Emanuele %+ External Organizations External Organizations External Organizations %T On the Voting Time of the Deterministic Majority Process : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5E31-3 %U urn:nbn:de:0030-drops-64675 %R 10.4230/LIPIcs.MFCS.2016.55 %D 2016 %B 41st International Symposium on Mathematical Foundations of Computer Science %Z date of event: 2016-08-22 - 2016-08-26 %C Kraków, Poland %B 41st International Symposium on Mathematical Foundations of Computer Science %E Sankowski, Piotr; Muscholl, Anca; Niedermeier, Rolf %P 1 - 15 %Z sequence number: 55 %I Schloss Dagstuhl %@ 978-3-95977-016-3 %B Leibniz International Proceedings in Informatics %N 58 %@ false %U http://drops.dagstuhl.de/doku/urheberrecht1.html %U http://drops.dagstuhl.de/opus/volltexte/2016/6467/
Kellnhofer, P. 2016. Perceptual Modeling for Stereoscopic 3D. urn:nbn:de:bsz:291-scidok-66813.
Abstract
Virtual and Augmented Reality applications typically rely on stereoscopic presentation and involve intensive object and observer motion. The combination of high-dynamic-range and stereoscopic capabilities has become popular in consumer displays and is a desirable functionality of upcoming head-mounted displays. This thesis focuses on the complex interactions between all these visual cues on digital displays. The first part investigates the challenges of combining stereoscopic 3D with motion. We consider the interaction between continuous motion and its presentation as discrete frames. We then discuss disparity processing for the accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and of eye-fixation changes caused by saccadic motion. The second part focuses on the role of high-dynamic-range imaging for stereoscopic displays. We go beyond current display capabilities by considering the full perceivable luminance range, and we simulate the real-world experience under such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and of rendering reflective and refractive surfaces. The core of our research methodology is perceptual modeling, supported by our own experimental studies, to overcome the limitations of current display technologies and to improve the viewer experience by enhancing perceived depth, reducing visual artifacts, and improving viewing comfort.
Export
BibTeX
@phdthesis{Kellnhoferphd2016, TITLE = {Perceptual Modeling for Stereoscopic {3D}}, AUTHOR = {Kellnhofer, Petr}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66813}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, ABSTRACT = {Virtual and Augmented Reality applications typically rely on stereoscopic presentation and involve intensive object and observer motion. The combination of high-dynamic-range and stereoscopic capabilities has become popular in consumer displays and is a desirable functionality of upcoming head-mounted displays. This thesis focuses on the complex interactions between all these visual cues on digital displays. The first part investigates the challenges of combining stereoscopic 3D with motion. We consider the interaction between continuous motion and its presentation as discrete frames. We then discuss disparity processing for the accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and of eye-fixation changes caused by saccadic motion. The second part focuses on the role of high-dynamic-range imaging for stereoscopic displays. We go beyond current display capabilities by considering the full perceivable luminance range, and we simulate the real-world experience under such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and of rendering reflective and refractive surfaces. The core of our research methodology is perceptual modeling, supported by our own experimental studies, to overcome the limitations of current display technologies and to improve the viewer experience by enhancing perceived depth, reducing visual artifacts, and improving viewing comfort.}, }
Endnote
%0 Thesis %A Kellnhofer, Petr %Y Myszkowski, Karol %A referee: Seidel, Hans-Peter %A referee: Masia, Belen %A referee: Matusik, Wojciech %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Perceptual Modeling for Stereoscopic 3D : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-BBA6-1 %U urn:nbn:de:bsz:291-scidok-66813 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xxiv, 158 p. %V phd %9 phd %X Virtual and Augmented Reality applications typically rely on stereoscopic presentation and involve intensive object and observer motion. The combination of high-dynamic-range and stereoscopic capabilities has become popular in consumer displays and is a desirable functionality of upcoming head-mounted displays. This thesis focuses on the complex interactions between all these visual cues on digital displays. The first part investigates the challenges of combining stereoscopic 3D with motion. We consider the interaction between continuous motion and its presentation as discrete frames. We then discuss disparity processing for the accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and of eye-fixation changes caused by saccadic motion. The second part focuses on the role of high-dynamic-range imaging for stereoscopic displays. We go beyond current display capabilities by considering the full perceivable luminance range, and we simulate the real-world experience under such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and of rendering reflective and refractive surfaces. The core of our research methodology is perceptual modeling, supported by our own experimental studies, to overcome the limitations of current display technologies and to improve the viewer experience by enhancing perceived depth, reducing visual artifacts, and improving viewing comfort. %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6681/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Kellnhofer, P., Didyk, P., Myszkowski, K., Hefeeda, M.M., Seidel, H.-P., and Matusik, W. 2016a. GazeStereo3D: Seamless Disparity Manipulations. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{KellnhoferSIGGRAPH2016, TITLE = {{GazeStereo3D}: {S}eamless Disparity Manipulations}, AUTHOR = {Kellnhofer, Petr and Didyk, Piotr and Myszkowski, Karol and Hefeeda, Mohamed M. and Seidel, Hans-Peter and Matusik, Wojciech}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925866}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {68}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Didyk, Piotr %A Myszkowski, Karol %A Hefeeda, Mohamed M. %A Seidel, Hans-Peter %A Matusik, Wojciech %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T GazeStereo3D: Seamless Disparity Manipulations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0190-4 %R 10.1145/2897824.2925866 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 68 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Kellnhofer, P., Didyk, P., Ritschel, T., Masia, B., Myszkowski, K., and Seidel, H.-P. 2016b. Motion Parallax in Stereo 3D: Model and Applications. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
Export
BibTeX
@article{Kellnhofer2016SGA, TITLE = {Motion Parallax in Stereo {3D}: {M}odel and Applications}, AUTHOR = {Kellnhofer, Petr and Didyk, Piotr and Ritschel, Tobias and Masia, Belen and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2980179.2980230}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, EID = {176}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Didyk, Piotr %A Ritschel, Tobias %A Masia, Belen %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Motion Parallax in Stereo 3D: Model and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82B6-D %R 10.1145/2980179.2980230 %7 2016 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %Z sequence number: 176 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2016c. Transformation-aware Perceptual Image Metric. Journal of Electronic Imaging 25, 5.
Export
BibTeX
@article{Kellnhofer2016jei, TITLE = {Transformation-aware Perceptual Image Metric}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {1017-9909}, DOI = {10.1117/1.JEI.25.5.053014}, PUBLISHER = {SPIE}, ADDRESS = {Bellingham, WA}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Journal of Electronic Imaging}, VOLUME = {25}, NUMBER = {5}, EID = {053014}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Transformation-aware Perceptual Image Metric : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82B3-4 %R 10.1117/1.JEI.25.5.053014 %7 2016 %D 2016 %J Journal of Electronic Imaging %V 25 %N 5 %Z sequence number: 053014 %I SPIE %C Bellingham, WA %@ false
Kerbl, B., Kenzel, M., Schmalstieg, D., Seidel, H.-P., and Steinberger, M. 2016. Hierarchical Bucket Queuing for Fine-Grained Priority Scheduling on the GPU. Computer Graphics Forum Early View.
Export
BibTeX
@article{Seidel_Steinberger2016, TITLE = {Hierarchical Bucket Queuing for Fine-Grained Priority Scheduling on the {GPU}}, AUTHOR = {Kerbl, Bernhard and Kenzel, Michael and Schmalstieg, Dieter and Seidel, Hans-Peter and Steinberger, Markus}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.13075}, PUBLISHER = {Blackwell-Wiley}, ADDRESS = {Oxford}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, JOURNAL = {Computer Graphics Forum}, VOLUME = {Early View}, PAGES = {1--17}, }
Endnote
%0 Journal Article %A Kerbl, Bernhard %A Kenzel, Michael %A Schmalstieg, Dieter %A Seidel, Hans-Peter %A Steinberger, Markus %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Hierarchical Bucket Queuing for Fine-Grained Priority Scheduling on the GPU : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-1823-8 %R 10.1111/cgf.13075 %7 2016-12-05 %D 2016 %8 05.12.2016 %J Computer Graphics Forum %O Computer Graphics Forum : journal of the European Association for Computer Graphics Comput. Graph. Forum %V Early View %& 1 %P 1 - 17 %I Blackwell-Wiley %C Oxford %@ false
Kim, H., Richardt, C., and Theobalt, C. 2016a. Video Depth-from-Defocus. Fourth International Conference on 3D Vision, IEEE Computer Society.
Export
BibTeX
@inproceedings{Kim3DV2016, TITLE = {Video Depth-from-Defocus}, AUTHOR = {Kim, Hyeongwoo and Richardt, Christian and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-5090-5407-7}, DOI = {10.1109/3DV.2016.46}, PUBLISHER = {IEEE Computer Society}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Fourth International Conference on 3D Vision}, PAGES = {370--379}, ADDRESS = {Stanford, CA, USA}, }
Endnote
%0 Conference Proceedings %A Kim, Hyeongwoo %A Richardt, Christian %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute University of Bath Computer Graphics, MPI for Informatics, Max Planck Society %T Video Depth-from-Defocus : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-557E-5 %R 10.1109/3DV.2016.46 %D 2016 %B Fourth International Conference on 3D Vision %Z date of event: 2016-10-25 - 2016-10-28 %C Stanford, CA, USA %B Fourth International Conference on 3D Vision %P 370 - 379 %I IEEE Computer Society %@ 978-1-5090-5407-7
Kim, H., Richardt, C., and Theobalt, C. 2016b. Video Depth-From-Defocus. http://arxiv.org/abs/1610.03782.
(arXiv: 1610.03782)
Abstract
Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
Export
BibTeX
@online{Kim1610.03782, TITLE = {Video Depth-From-Defocus}, AUTHOR = {Kim, Hyeongwoo and Richardt, Christian and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1610.03782}, EPRINT = {1610.03782}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.}, }
Endnote
%0 Report %A Kim, Hyeongwoo %A Richardt, Christian %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute University of Bath Computer Graphics, MPI for Informatics, Max Planck Society %T Video Depth-From-Defocus : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-B02D-7 %U http://arxiv.org/abs/1610.03782 %D 2016 %X Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2016c. Local High-order Regularization on Data Manifolds. http://arxiv.org/abs/1602.03805.
(arXiv: 1602.03805)
Abstract
The common graph Laplacian regularizer is well-established in semi-supervised learning and spectral dimensionality reduction. However, as a first-order regularizer, it can lead to degenerate functions in high-dimensional manifolds. The iterated graph Laplacian enables high-order regularization, but it has a high computational complexity and so cannot be applied to large problems. We introduce a new regularizer which is globally high order and so does not suffer from the degeneracy of the graph Laplacian regularizer, but is also sparse for efficient computation in semi-supervised learning applications. We reduce computational complexity by building a local first-order approximation of the manifold as a surrogate geometry, and construct our high-order regularizer based on local derivative evaluations therein. Experiments on human body shape and pose analysis demonstrate the effectiveness and efficiency of our method.
Export
BibTeX
@online{Kim1602.03805, TITLE = {Local High-order Regularization on Data Manifolds}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, URL = {http://arxiv.org/abs/1602.03805}, EPRINT = {1602.03805}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {The common graph Laplacian regularizer is well-established in semi-supervised learning and spectral dimensionality reduction. However, as a first-order regularizer, it can lead to degenerate functions in high-dimensional manifolds. The iterated graph Laplacian enables high-order regularization, but it has a high computational complexity and so cannot be applied to large problems. We introduce a new regularizer which is globally high order and so does not suffer from the degeneracy of the graph Laplacian regularizer, but is also sparse for efficient computation in semi-supervised learning applications. We reduce computational complexity by building a local first-order approximation of the manifold as a surrogate geometry, and construct our high-order regularizer based on local derivative evaluations therein. Experiments on human body shape and pose analysis demonstrate the effectiveness and efficiency of our method.}, }
Endnote
%0 Report %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Local High-order Regularization on Data Manifolds : %U http://hdl.handle.net/11858/00-001M-0000-002C-2428-A %U http://arxiv.org/abs/1602.03805 %D 2016 %X The common graph Laplacian regularizer is well-established in semi-supervised learning and spectral dimensionality reduction. However, as a first-order regularizer, it can lead to degenerate functions in high-dimensional manifolds. The iterated graph Laplacian enables high-order regularization, but it has a high computational complexity and so cannot be applied to large problems. We introduce a new regularizer which is globally high order and so does not suffer from the degeneracy of the graph Laplacian regularizer, but is also sparse for efficient computation in semi-supervised learning applications. We reduce computational complexity by building a local first-order approximation of the manifold as a surrogate geometry, and construct our high-order regularizer based on local derivative evaluations therein. Experiments on human body shape and pose analysis demonstrate the effectiveness and efficiency of our method. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2016d. Context-guided Diffusion for Label Propagation on Graphs. http://arxiv.org/abs/1602.06439.
(arXiv: 1602.06439)
Abstract
Existing approaches for diffusion on graphs, e.g., for label propagation, are mainly focused on isotropic diffusion, which is induced by the commonly used graph Laplacian regularizer. Inspired by the success of diffusivity tensors for anisotropic diffusion in image processing, we present anisotropic diffusion on graphs and the corresponding label propagation algorithm. We develop positive definite diffusivity operators on the vector bundles of Riemannian manifolds, and discretize them to diffusivity operators on graphs. This enables us to easily define new robust diffusivity operators which significantly improve semi-supervised learning performance over existing diffusion algorithms.
Export
BibTeX
@online{KimarXiv1602.06439, TITLE = {Context-guided Diffusion for Label Propagation on Graphs}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.06439}, EPRINT = {1602.06439}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Existing approaches for diffusion on graphs, e.g., for label propagation, are mainly focused on isotropic diffusion, which is induced by the commonly used graph Laplacian regularizer. Inspired by the success of diffusivity tensors for anisotropic diffusion in image processing, we present anisotropic diffusion on graphs and the corresponding label propagation algorithm. We develop positive definite diffusivity operators on the vector bundles of Riemannian manifolds, and discretize them to diffusivity operators on graphs. This enables us to easily define new robust diffusivity operators which significantly improve semi-supervised learning performance over existing diffusion algorithms.}, }
Endnote
%0 Report %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Context-guided Diffusion for Label Propagation on Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A84-9 %U http://arxiv.org/abs/1602.06439 %D 2016 %X Existing approaches for diffusion on graphs, e.g., for label propagation, are mainly focused on isotropic diffusion, which is induced by the commonly used graph Laplacian regularizer. Inspired by the success of diffusivity tensors for anisotropic diffusion in image processing, we present anisotropic diffusion on graphs and the corresponding label propagation algorithm. We develop positive definite diffusivity operators on the vector bundles of Riemannian manifolds, and discretize them to diffusivity operators on graphs. This enables us to easily define new robust diffusivity operators which significantly improve semi-supervised learning performance over existing diffusion algorithms. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2016e. Semi-supervised Learning with Explicit Relationship Regularization. http://arxiv.org/abs/1602.03808.
(arXiv: 1602.03808)
Abstract
In many learning tasks, the structure of the target space of a function holds rich information about the relationships between evaluations of functions on different data points. Existing approaches attempt to exploit this relationship information implicitly by enforcing smoothness on function evaluations only. However, what happens if we explicitly regularize the relationships between function evaluations? Inspired by homophily, we regularize based on a smooth relationship function, either defined from the data or with labels. In experiments, we demonstrate that this significantly improves the performance of state-of-the-art algorithms in semi-supervised classification and in spectral data embedding for constrained clustering and dimensionality reduction.
Export
BibTeX
@online{KimarXiv1602.03808, TITLE = {Semi-supervised Learning with Explicit Relationship Regularization}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.03808}, EPRINT = {1602.03808}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {In many learning tasks, the structure of the target space of a function holds rich information about the relationships between evaluations of functions on different data points. Existing approaches attempt to exploit this relationship information implicitly by enforcing smoothness on function evaluations only. However, what happens if we explicitly regularize the relationships between function evaluations? Inspired by homophily, we regularize based on a smooth relationship function, either defined from the data or with labels. In experiments, we demonstrate that this significantly improves the performance of state-of-the-art algorithms in semi-supervised classification and in spectral data embedding for constrained clustering and dimensionality reduction.}, }
Endnote
%0 Report %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Semi-supervised Learning with Explicit Relationship Regularization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A62-6 %U http://arxiv.org/abs/1602.03808 %D 2016 %X In many learning tasks, the structure of the target space of a function holds rich information about the relationships between evaluations of functions on different data points. Existing approaches attempt to exploit this relationship information implicitly by enforcing smoothness on function evaluations only. However, what happens if we explicitly regularize the relationships between function evaluations? Inspired by homophily, we regularize based on a smooth relationship function, either defined from the data or with labels. In experiments, we demonstrate that this significantly improves the performance of state-of-the-art algorithms in semi-supervised classification and in spectral data embedding for constrained clustering and dimensionality reduction. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Learning, cs.LG
Klehm, O. 2016. User-Guided Scene Stylization using Efficient Rendering Techniques. urn:nbn:de:bsz:291-scidok-65321.
Export
BibTeX
@phdthesis{Klehmphd2016, TITLE = {User-Guided Scene Stylization using Efficient Rendering Techniques}, AUTHOR = {Klehm, Oliver}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65321}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Klehm, Oliver %Y Seidel, Hans-Peter %A referee: Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T User-Guided Scene Stylization using Efficient Rendering Techniques : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-9C13-A %U urn:nbn:de:bsz:291-scidok-65321 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XIII, 111 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6532/
Krafka, K., Khosla, A., Kellnhofer, P., et al. 2016. Eye Tracking for Everyone. 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), IEEE Computer Society.
Export
BibTeX
@inproceedings{KrafkaCVPR2016, TITLE = {Eye Tracking for Everyone}, AUTHOR = {Krafka, Kyle and Khosla, Aditya and Kellnhofer, Petr and Kannan, Harini and Bhandarkar, Suchendra and Matusik, Wojciech and Torralba, Antonio}, LANGUAGE = {eng}, ISBN = {978-1-4673-8851-1}, DOI = {10.1109/CVPR.2016.239}, PUBLISHER = {IEEE Computer Society}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016)}, PAGES = {2176--2184}, ADDRESS = {Las Vegas, NV, USA}, }
Endnote
%0 Conference Proceedings %A Krafka, Kyle %A Khosla, Aditya %A Kellnhofer, Petr %A Kannan, Harini %A Bhandarkar, Suchendra %A Matusik, Wojciech %A Torralba, Antonio %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T Eye Tracking for Everyone : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-8245-D %R 10.1109/CVPR.2016.239 %D 2016 %B 29th IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2016-06-26 - 2016-07-01 %C Las Vegas, NV, USA %B 29th IEEE Conference on Computer Vision and Pattern Recognition %P 2176 - 2184 %I IEEE Computer Society %@ 978-1-4673-8851-1 %U http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Krafka_Eye_Tracking_for_CVPR_2016_paper.pdf
Lavoué, G., Liu, H., Myszkowski, K., and Lin, W. 2016. Quality Assessment and Perception in Computer Graphics. IEEE Computer Graphics and Applications 36, 4.
Export
BibTeX
@article{Lavoue2016, TITLE = {Quality Assessment and Perception in Computer Graphics}, AUTHOR = {Lavou{\'e}, Guillaume and Liu, Hantao and Myszkowski, Karol and Lin, Weisi}, LANGUAGE = {eng}, ISSN = {0272-1716}, DOI = {10.1109/MCG.2016.72}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {IEEE Computer Graphics and Applications}, VOLUME = {36}, NUMBER = {4}, PAGES = {21--22}, }
Endnote
%0 Journal Article %A Lavoué, Guillaume %A Liu, Hantao %A Myszkowski, Karol %A Lin, Weisi %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Quality Assessment and Perception in Computer Graphics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-8411-2 %R 10.1109/MCG.2016.72 %7 2016-07-29 %D 2016 %J IEEE Computer Graphics and Applications %V 36 %N 4 %& 21 %P 21 - 22 %I IEEE Computer Society %C Los Alamitos, CA %@ false
Leimkühler, T., Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2016. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion. Graphics Interface 2016, 42nd Graphics Interface Conference, Canadian Information Processing Society.
Export
BibTeX
@inproceedings{LeimkuehlerGI2016, TITLE = {Perceptual real-time {2D}-to-{3D} conversion using cue fusion}, AUTHOR = {Leimk{\"u}hler, Thomas and Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-0-9947868-1-4}, DOI = {10.20380/GI2016.02}, PUBLISHER = {Canadian Information Processing Society}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Graphics Interface 2016, 42nd Graphics Interface Conference}, EDITOR = {Popa, Tiberiu and Moffatt, Karyn}, PAGES = {5--12}, ADDRESS = {Victoria, Canada}, }
Endnote
%0 Conference Proceedings %A Leimkühler, Thomas %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-823D-1 %R 10.20380/GI2016.02 %D 2016 %B 42nd Graphics Interface Conference %Z date of event: 2016-06-01 - 2016-06-03 %C Victoria, Canada %B Graphics Interface 2016 %E Popa, Tiberiu; Moffatt, Karyn %P 5 - 12 %I Canadian Information Processing Society %@ 978-0-9947868-1-4
Lochmann, G., Reinert, B., Buchacher, A., and Ritschel, T. 2016. Real-time Novel-view Synthesis for Volume Rendering Using a Piecewise-analytic Representation. VMV 2016 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{Lochmann:2016:vmv, TITLE = {Real-time Novel-view Synthesis for Volume Rendering Using a Piecewise-analytic Representation}, AUTHOR = {Lochmann, Gerrit and Reinert, Bernhard and Buchacher, Arend and Ritschel, Tobias}, LANGUAGE = {eng}, ISBN = {978-3-03868-025-3}, DOI = {10.2312/vmv.20161346}, PUBLISHER = {Eurographics Association}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {VMV 2016 Vision, Modeling and Visualization}, EDITOR = {Hullin, Matthias and Stamminger, Marc and Weinkauf, Tino}, PAGES = {85--92}, ADDRESS = {Bayreuth, Germany}, }
Endnote
%0 Conference Proceedings %A Lochmann, Gerrit %A Reinert, Bernhard %A Buchacher, Arend %A Ritschel, Tobias %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Real-time Novel-view Synthesis for Volume Rendering Using a Piecewise-analytic Representation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-64EA-B %R 10.2312/vmv.20161346 %D 2016 %B 21st International Symposium on Vision, Modeling and Visualization %Z date of event: 2016-10-10 - 2016-10-12 %C Bayreuth, Germany %B VMV 2016 Vision, Modeling and Visualization %E Hullin, Matthias; Stamminger, Marc; Weinkauf, Tino %P 85 - 92 %I Eurographics Association %@ 978-3-03868-025-3
Lurie, K.L., Angst, R., Seibel, E.J., Liao, J.C., and Ellerbee Bowden, A.K. 2016. Registration of Free-hand OCT Daughter Endoscopy to 3D Organ Reconstruction. Biomedical Optics Express 7, 12.
Export
BibTeX
@article{Lurie2016, TITLE = {Registration of Free-hand {OCT} Daughter Endoscopy to {3D} Organ Reconstruction}, AUTHOR = {Lurie, Kristen L. and Angst, Roland and Seibel, Eric J. and Liao, Joseph C. and Ellerbee Bowden, Audrey K.}, LANGUAGE = {eng}, ISSN = {2156-7085}, DOI = {10.1364/BOE.7.004995}, PUBLISHER = {Optical Society of America}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Biomedical Optics Express}, VOLUME = {7}, NUMBER = {12}, PAGES = {4995--5009}, }
Endnote
%0 Journal Article %A Lurie, Kristen L. %A Angst, Roland %A Seibel, Eric J. %A Liao, Joseph C. %A Ellerbee Bowden, Audrey K. %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Registration of Free-hand OCT Daughter Endoscopy to 3D Organ Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-2F94-F %R 10.1364/BOE.7.004995 %7 2016 %D 2016 %J Biomedical Optics Express %V 7 %N 12 %& 4995 %P 4995 - 5009 %I Optical Society of America %@ false
Mantiuk, R.K. and Myszkowski, K. 2016. Perception-Inspired High Dynamic Range Video Coding and Compression. In: CHIPS 2020 VOL. 2. Springer, New York, NY.
Export
BibTeX
@incollection{Mantiuk_Chips2020, TITLE = {Perception-Inspired High Dynamic Range Video Coding and Compression}, AUTHOR = {Mantiuk, Rafa{\l} K. and Myszkowski, Karol}, LANGUAGE = {eng}, ISBN = {978-3-319-22092-5}, DOI = {10.1007/978-3-319-22093-2_14}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {CHIPS 2020 VOL. 2}, EDITOR = {Hoefflinger, Bernd}, PAGES = {211--220}, SERIES = {The Frontiers Collection}, }
Endnote
%0 Book Section %A Mantiuk, Rafał K. %A Myszkowski, Karol %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Perception-Inspired High Dynamic Range Video Coding and Compression : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-2DE8-3 %R 10.1007/978-3-319-22093-2_14 %D 2016 %B CHIPS 2020 VOL. 2 %E Hoefflinger, Bernd %P 211 - 220 %I Springer %C New York, NY %@ 978-3-319-22092-5 %S The Frontiers Collection
Meka, A., Zollhöfer, M., Richardt, C., and Theobalt, C. 2016. Live Intrinsic Video. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{MekaSIGGRAPH2016, TITLE = {Live Intrinsic Video}, AUTHOR = {Meka, Abhimitra and Zollh{\"o}fer, Michael and Richardt, Christian and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925907}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {109}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Meka, Abhimitra %A Zollhöfer, Michael %A Richardt, Christian %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Live Intrinsic Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-07C8-3 %R 10.1145/2897824.2925907 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 109 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Moran, S. and Rashtchian, C. 2016. Shattered Sets and the Hilbert Function. 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016), Schloss Dagstuhl.
Export
BibTeX
@inproceedings{MoranMFCS2016, TITLE = {Shattered Sets and the {H}ilbert Function}, AUTHOR = {Moran, Shay and Rashtchian, Cyrus}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-016-3}, URL = {urn:nbn:de:0030-drops-64814}, DOI = {10.4230/LIPIcs.MFCS.2016.70}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)}, EDITOR = {Sankowski, Piotr and Muscholl, Anca and Niedermeier, Rolf}, PAGES = {1--14}, EID = {70}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {58}, ADDRESS = {Krak{\'o}w, Poland}, }
Endnote
%0 Conference Proceedings %A Moran, Shay %A Rashtchian, Cyrus %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Shattered Sets and the Hilbert Function : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-51D6-E %U urn:nbn:de:0030-drops-64814 %R 10.4230/LIPIcs.MFCS.2016.70 %D 2016 %B 41st International Symposium on Mathematical Foundations of Computer Science %Z date of event: 2016-08-22 - 2016-08-26 %C Kraków, Poland %B 41st International Symposium on Mathematical Foundations of Computer Science %E Sankowski, Piotr; Muscholl, Anca; Niedermeier, Rolf %P 1 - 14 %Z sequence number: 70 %I Schloss Dagstuhl %@ 978-3-95977-016-3 %B Leibniz International Proceedings in Informatics %N 58 %@ false %U http://drops.dagstuhl.de/doku/urheberrecht1.html %U http://drops.dagstuhl.de/opus/volltexte/2016/6481/
Nalbach, O., Arabadzhiyska, E., Mehta, D., Seidel, H.-P., and Ritschel, T. 2016. Deep Shading: Convolutional Neural Networks for Screen-Space Shading. http://arxiv.org/abs/1603.06078.
(arXiv: 1603.06078)
Abstract
In computer vision, Convolutional Neural Networks (CNNs) have recently achieved new levels of performance for several inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen-space shading has recently increased the visual quality in interactive image synthesis, where per-pixel attributes such as positions, normals or reflectance of a virtual 3D scene are converted into RGB pixel appearance, enabling effects like ambient occlusion, indirect light, scattering, depth-of-field, motion blur, or anti-aliasing. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading simulates all screen-space effects as well as arbitrary combinations thereof at competitive quality and speed while not being programmed by human experts but learned from example images.
Export
BibTeX
@online{NalbacharXiv2016, TITLE = {Deep Shading: Convolutional Neural Networks for Screen-Space Shading}, AUTHOR = {Nalbach, Oliver and Arabadzhiyska, Elena and Mehta, Dushyant and Seidel, Hans-Peter and Ritschel, Tobias}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1603.06078}, EPRINT = {1603.06078}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {In computer vision, Convolutional Neural Networks (CNNs) have recently achieved new levels of performance for several inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen-space shading has recently increased the visual quality in interactive image synthesis, where per-pixel attributes such as positions, normals or reflectance of a virtual 3D scene are converted into RGB pixel appearance, enabling effects like ambient occlusion, indirect light, scattering, depth-of-field, motion blur, or anti-aliasing. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading simulates all screen-space effects as well as arbitrary combinations thereof at competitive quality and speed while not being programmed by human experts but learned from example images.}, }
Endnote
%0 Report %A Nalbach, Oliver %A Arabadzhiyska, Elena %A Mehta, Dushyant %A Seidel, Hans-Peter %A Ritschel, Tobias %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Deep Shading: Convolutional Neural Networks for Screen-Space Shading : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0174-4 %U http://arxiv.org/abs/1603.06078 %D 2016 %X In computer vision, Convolutional Neural Networks (CNNs) have recently achieved new levels of performance for several inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen-space shading has recently increased the visual quality in interactive image synthesis, where per-pixel attributes such as positions, normals or reflectance of a virtual 3D scene are converted into RGB pixel appearance, enabling effects like ambient occlusion, indirect light, scattering, depth-of-field, motion blur, or anti-aliasing. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading simulates all screen-space effects as well as arbitrary combinations thereof at competitive quality and speed while not being programmed by human experts but learned from example images. %K Computer Science, Graphics, cs.GR,Computer Science, Learning, cs.LG
Nittala, A.S. and Steimle, J. 2016. Digital Fabrication Pipeline for On-Body Sensors: Design Goals and Challenges. UbiComp’16 Adjunct, ACM.
Export
BibTeX
@inproceedings{NittalaUbiComp2016, TITLE = {Digital Fabrication Pipeline for On-Body Sensors: {D}esign Goals and Challenges}, AUTHOR = {Nittala, Aditya Shekhar and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISBN = {978-1-4503-4462-3}, DOI = {10.1145/2968219.2979140}, PUBLISHER = {ACM}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {UbiComp'16 Adjunct}, PAGES = {950--953}, ADDRESS = {Heidelberg, Germany}, }
Endnote
%0 Conference Proceedings %A Nittala, Aditya Shekhar %A Steimle, Jürgen %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Digital Fabrication Pipeline for On-Body Sensors: Design Goals and Challenges : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-989E-1 %R 10.1145/2968219.2979140 %D 2016 %B ACM International Joint Conference on Pervasive and Ubiquitous Computing %Z date of event: 2016-09-12 - 2016-09-16 %C Heidelberg, Germany %B UbiComp'16 Adjunct %P 950 - 953 %I ACM %@ 978-1-4503-4462-3
Pandey, A., Saxena, N., and Sinhababu, A. 2016. Algebraic Independence over Positive Characteristic: New Criterion and Applications to Locally Low Algebraic Rank Circuits. 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016), Schloss Dagstuhl.
Export
BibTeX
@inproceedings{pandey_et_al:LIPIcs:2016:6505, TITLE = {Algebraic Independence over Positive Characteristic: {N}ew Criterion and Applications to Locally Low Algebraic Rank Circuits}, AUTHOR = {Pandey, Anurag and Saxena, Nitin and Sinhababu, Amit}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-016-3}, URL = {urn:nbn:de:0030-drops-65057}, DOI = {10.4230/LIPIcs.MFCS.2016.74}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)}, EDITOR = {Sankowski, Piotr and Muscholl, Anca and Niedermeier, Rolf}, PAGES = {1--15}, EID = {74}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {58}, ADDRESS = {Krak{\'o}w, Poland}, }
Endnote
%0 Conference Proceedings %A Pandey, Anurag %A Saxena, Nitin %A Sinhababu, Amit %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Algebraic Independence over Positive Characteristic: New Criterion and Applications to Locally Low Algebraic Rank Circuits : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5895-6 %U urn:nbn:de:0030-drops-65057 %R 10.4230/LIPIcs.MFCS.2016.74 %D 2016 %B 41st International Symposium on Mathematical Foundations of Computer Science %Z date of event: 2016-08-22 - 2016-08-26 %C Kraków, Poland %B 41st International Symposium on Mathematical Foundations of Computer Science %E Sankowski, Piotr; Muscholl, Anca; Niedermeier, Rolf %P 1 - 15 %Z sequence number: 74 %I Schloss Dagstuhl %@ 978-3-95977-016-3 %B Leibniz International Proceedings in Informatics %N 58 %@ false %U http://drops.dagstuhl.de/doku/urheberrecht1.html %U http://drops.dagstuhl.de/opus/volltexte/2016/6505/
Piovarči, M., Levin, D.I.W., Rebello, J., et al. 2016. An Interaction-Aware, Perceptual Model for Non-Linear Elastic Objects. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{PiovarciSIGGRAPH2016, TITLE = {An Interaction-Aware, Perceptual Model for Non-Linear Elastic Objects}, AUTHOR = {Piovar{\v c}i, Michal and Levin, David I. W. and Rebello, Jason and Chen, Desai and {\v D}urikovi{\v c}, Roman and Pfister, Hanspeter and Matusik, Wojciech and Didyk, Piotr}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925885}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {55}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Piovarči, Michal %A Levin, David I. W. %A Rebello, Jason %A Chen, Desai %A Ďurikovič, Roman %A Pfister, Hanspeter %A Matusik, Wojciech %A Didyk, Piotr %+ External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T An Interaction-Aware, Perceptual Model for Non-Linear Elastic Objects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0187-9 %R 10.1145/2897824.2925885 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 55 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Pishchulin, L. 2016. Articulated People Detection and Pose Estimation in Challenging Real World Environments. urn:nbn:de:bsz:291-scidok-65478.
Export
BibTeX
@phdthesis{PishchulinPhD2016, TITLE = {Articulated People Detection and Pose Estimation in Challenging Real World Environments}, AUTHOR = {Pishchulin, Leonid}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65478}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Pishchulin, Leonid %Y Schiele, Bernt %A referee: Theobalt, Christian %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Articulated People Detection and Pose Estimation in Challenging Real World Environments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-F000-B %U urn:nbn:de:bsz:291-scidok-65478 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XIII, 248 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6547/
Reinert, B., Kopf, J., Ritschel, T., Cuervo, E., Chu, D., and Seidel, H.-P. 2016a. Proxy-guided Image-based Rendering for Mobile Devices. Computer Graphics Forum (Proc. Pacific Graphics 2016) 35, 7.
Export
BibTeX
@article{ReinertPG2016, TITLE = {Proxy-guided Image-based Rendering for Mobile Devices}, AUTHOR = {Reinert, Bernhard and Kopf, Johannes and Ritschel, Tobias and Cuervo, Eduardo and Chu, David and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.13032}, PUBLISHER = {Blackwell-Wiley}, ADDRESS = {Oxford}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {35}, NUMBER = {7}, PAGES = {353--362}, BOOKTITLE = {The 24th Pacific Conference on Computer Graphics and Applications Short Papers Proceedings (Pacific Graphics 2016)}, }
Endnote
%0 Journal Article %A Reinert, Bernhard %A Kopf, Johannes %A Ritschel, Tobias %A Cuervo, Eduardo %A Chu, David %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Proxy-guided Image-based Rendering for Mobile Devices : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-2DD8-7 %R 10.1111/cgf.13032 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 7 %& 353 %P 353 - 362 %I Blackwell-Wiley %C Oxford %@ false %B The 24th Pacific Conference on Computer Graphics and Applications Short Papers Proceedings %O Pacific Graphics 2016 PG 2016
Reinert, B. 2016. Interactive, Example-driven Synthesis and Manipulation of Visual Media. urn:nbn:de:bsz:291-scidok-67660.
Export
BibTeX
@phdthesis{Reinertbphd17, TITLE = {Interactive, Example-driven Synthesis and Manipulation of Visual Media}, AUTHOR = {Reinert, Bernhard}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67660}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Reinert, Bernhard %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Interactive, Example-driven Synthesis and Manipulation of Visual Media : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5A03-B %U urn:nbn:de:bsz:291-scidok-67660 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XX, 116, XVII p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6766/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Reinert, B., Ritschel, T., and Seidel, H.-P. 2016b. Animated 3D Creatures from Single-view Video by Skeletal Sketching. Graphics Interface 2016, 42nd Graphics Interface Conference, Canadian Information Processing Society.
Export
BibTeX
@inproceedings{Reinert:2016:AnimatedCreatures, TITLE = {Animated {3D} Creatures from Single-view Video by Skeletal Sketching}, AUTHOR = {Reinert, Bernhard and Ritschel, Tobias and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-0-9947868-1-4}, DOI = {10.20380/GI2016.17}, PUBLISHER = {Canadian Information Processing Society}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Graphics Interface 2016, 42nd Graphics Interface Conference}, EDITOR = {Popa, Tiberiu and Moffatt, Karyn}, PAGES = {133--143}, ADDRESS = {Victoria, BC, Canada}, }
Endnote
%0 Conference Proceedings %A Reinert, Bernhard %A Ritschel, Tobias %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Animated 3D Creatures from Single-view Video by Skeletal Sketching : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-64EC-7 %R 10.20380/GI2016.17 %D 2016 %B 42nd Graphics Interface Conference %Z date of event: 2016-06-01 - 2016-06-03 %C Victoria, BC, Canada %B Graphics Interface 2016 %E Popa, Tiberiu; Moffatt, Karyn %P 133 - 143 %I Canadian Information Processing Society %@ 978-0-9947868-1-4
Reinert, B., Ritschel, T., Seidel, H.-P., and Georgiev, I. 2016c. Projective Blue-Noise Sampling. Computer Graphics Forum 35, 1.
Export
BibTeX
@article{ReinertCGF2016, TITLE = {Projective Blue-Noise Sampling}, AUTHOR = {Reinert, Bernhard and Ritschel, Tobias and Seidel, Hans-Peter and Georgiev, Iliyan}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12725}, PUBLISHER = {Wiley}, ADDRESS = {Chichester}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum}, VOLUME = {35}, NUMBER = {1}, PAGES = {285--295}, }
Endnote
%0 Journal Article %A Reinert, Bernhard %A Ritschel, Tobias %A Seidel, Hans-Peter %A Georgiev, Iliyan %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Projective Blue-Noise Sampling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-1A31-D %R 10.1111/cgf.12725 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 1 %& 285 %P 285 - 295 %I Wiley %C Chichester %@ false
Rematas, K., Nguyen, C., Ritschel, T., Fritz, M., and Tuytelaars, T. 2016. Novel Views of Objects from a Single Image. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Export
BibTeX
@article{rematas16tpami, TITLE = {Novel Views of Objects from a Single Image}, AUTHOR = {Rematas, Konstantinos and Nguyen, Chuong and Ritschel, Tobias and Fritz, Mario and Tuytelaars, Tinne}, LANGUAGE = {eng}, ISSN = {0162-8828}, DOI = {10.1109/TPAMI.2016.2601093}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, JOURNAL = {IEEE Transactions on Pattern Analysis and Machine Intelligence}, PAGES = {1--14}, }
Endnote
%0 Journal Article %A Rematas, Konstantinos %A Nguyen, Chuong %A Ritschel, Tobias %A Fritz, Mario %A Tuytelaars, Tinne %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Novel Views of Objects from a Single Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-058A-1 %R 10.1109/TPAMI.2016.2601093 %7 2016 %D 2016 %K Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Graphics, cs.GR %J IEEE Transactions on Pattern Analysis and Machine Intelligence %O IEEE Trans. Pattern Anal. Mach. Intell. %& 1 %P 1 - 14 %I IEEE Computer Society %C Los Alamitos, CA %@ false
Rhodin, H. 2016. From Motion Capture to Interactive Virtual Worlds: Towards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation. urn:nbn:de:bsz:291-scidok-67413.
Export
BibTeX
@phdthesis{RhodinPhD2016, TITLE = {From Motion Capture to Interactive Virtual Worlds: {T}owards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation}, AUTHOR = {Rhodin, Helge}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67413}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Rhodin, Helge %Y Theobalt, Christian %A referee: Seidel, Hans-Peter %A referee: Bregler, Christoph %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T From Motion Capture to Interactive Virtual Worlds: Towards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-6310-C %U urn:nbn:de:bsz:291-scidok-67413 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P 179 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6741/
Rhodin, H., Richardt, C., Casas, D., et al. 2016a. EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
Export
BibTeX
@article{Rhodin2016SGA, TITLE = {{EgoCap}: {E}gocentric Marker-less Motion Capture with Two Fisheye Cameras}, AUTHOR = {Rhodin, Helge and Richardt, Christian and Casas, Dan and Insafutdinov, Eldar and Shafiei, Mohammad and Seidel, Hans-Peter and Schiele, Bernt and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, EID = {162}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Rhodin, Helge %A Richardt, Christian %A Casas, Dan %A Insafutdinov, Eldar %A Shafiei, Mohammad %A Seidel, Hans-Peter %A Schiele, Bernt %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute University of Bath Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-8321-6 %7 2016 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %Z sequence number: 162 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Rhodin, H., Robertini, N., Casas, D., Richardt, C., Seidel, H.-P., and Theobalt, C. 2016b. General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues. http://arxiv.org/abs/1607.08659.
(arXiv: 1607.08659)
Abstract
Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimension and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation - skeleton, volumetric shape, appearance, and optionally a body surface - and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume raycasting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, as well as variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way.
Export
BibTeX
@online{Rhodin2016arXiv1607.08659, TITLE = {General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues}, AUTHOR = {Rhodin, Helge and Robertini, Nadia and Casas, Dan and Richardt, Christian and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1607.08659}, EPRINT = {1607.08659}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimension and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation -- skeleton, volumetric shape, appearance, and optionally a body surface -- and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume raycasting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, as well as variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way.}, }
Endnote
%0 Report %A Rhodin, Helge %A Robertini, Nadia %A Casas, Dan %A Richardt, Christian %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9883-C %U http://arxiv.org/abs/1607.08659 %D 2016 %X Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimension and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation - skeleton, volumetric shape, appearance, and optionally a body surface - and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume raycasting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, as well as variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Rhodin, H., Robertini, N., Casas, D., Richardt, C., Seidel, H.-P., and Theobalt, C. 2016c. General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues. Computer Vision -- ECCV 2016, Springer.
Export
BibTeX
@inproceedings{RhodinECCV2016, TITLE = {General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues}, AUTHOR = {Rhodin, Helge and Robertini, Nadia and Casas, Dan and Richardt, Christian and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-3-319-46453-4}, DOI = {10.1007/978-3-319-46454-1_31}, PUBLISHER = {Springer}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Computer Vision -- ECCV 2016}, DEBUG = {author: Leibe, Bastian; author: Matas, Jiri; author: Sebe, Nicu; author: Welling, Max}, PAGES = {509--526}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9909}, ADDRESS = {Amsterdam, The Netherlands}, }
Endnote
%0 Conference Proceedings %A Rhodin, Helge %A Robertini, Nadia %A Casas, Dan %A Richardt, Christian %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-986D-F %R 10.1007/978-3-319-46454-1_31 %D 2016 %B 14th European Conference on Computer Vision %Z date of event: 2016-10-11 - 2016-10-14 %C Amsterdam, The Netherlands %B Computer Vision -- ECCV 2016 %E Leibe, Bastian; Matas, Jiri; Sebe, Nicu; Welling, Max %P 509 - 526 %I Springer %@ 978-3-319-46453-4 %B Lecture Notes in Computer Science %N 9909
Rhodin, H., Robertini, N., Richardt, C., Seidel, H.-P., and Theobalt, C. 2016d. A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation. http://arxiv.org/abs/1602.03725.
(arXiv: 1602.03725)
Abstract
Generative reconstruction methods compute the 3D configuration (such as pose and/or geometry) of a shape by optimizing the overlap of the projected 3D shape model with images. Proper handling of occlusions is a big challenge, since the visibility function that indicates if a surface point is seen from a camera can often not be formulated in closed form, and is in general discrete and non-differentiable at occlusion boundaries. We present a new scene representation that enables an analytically differentiable closed-form formulation of surface visibility. In contrast to previous methods, this yields smooth, analytically differentiable, and efficient to optimize pose similarity energies with rigorous occlusion handling, fewer local minima, and experimentally verified improved convergence of numerical optimization. The underlying idea is a new image formation model that represents opaque objects by a translucent medium with a smooth Gaussian density distribution which turns visibility into a smooth phenomenon. We demonstrate the advantages of our versatile scene model in several generative pose estimation problems, namely marker-less multi-object pose estimation, marker-less human motion capture with few cameras, and image-based 3D geometry estimation.
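The core idea of this paper, replacing opaque geometry by a translucent Gaussian density so that visibility becomes a smooth, analytic function, can be illustrated with a minimal numeric sketch. All function and variable names below are illustrative, not from the paper; only the closed-form Gaussian line integral is used.

```python
import numpy as np

def ray_transmittance(origin, direction, centers, sigmas, densities):
    """Smooth visibility of a ray through isotropic Gaussian blobs.

    Each blob contributes a closed-form line integral of its density
    along the ray, so the transmittance exp(-sum) is analytically
    differentiable in all blob parameters (no discrete visibility).
    """
    d = direction / np.linalg.norm(direction)
    v = centers - origin                     # (n, 3) origin-to-center
    t_along = v @ d                          # projection onto the ray
    perp = v - np.outer(t_along, d)          # perpendicular offset
    dist2 = np.sum(perp**2, axis=1)
    # closed-form integral of a Gaussian along the full line
    integral = densities * sigmas * np.sqrt(2 * np.pi) \
        * np.exp(-dist2 / (2 * sigmas**2))
    return np.exp(-np.sum(integral))
```

A ray that passes through a dense blob is strongly attenuated, while a ray that misses it keeps transmittance near 1, and the transition between the two is smooth rather than a hard occlusion boundary.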
Export
BibTeX
@online{Rhodin2016arXiv1602.03725, TITLE = {A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation}, AUTHOR = {Rhodin, Helge and Robertini, Nadia and Richardt, Christian and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.03725}, EPRINT = {1602.03725}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Generative reconstruction methods compute the 3D configuration (such as pose and/or geometry) of a shape by optimizing the overlap of the projected 3D shape model with images. Proper handling of occlusions is a big challenge, since the visibility function that indicates if a surface point is seen from a camera can often not be formulated in closed form, and is in general discrete and non-differentiable at occlusion boundaries. We present a new scene representation that enables an analytically differentiable closed-form formulation of surface visibility. In contrast to previous methods, this yields smooth, analytically differentiable, and efficient to optimize pose similarity energies with rigorous occlusion handling, fewer local minima, and experimentally verified improved convergence of numerical optimization. The underlying idea is a new image formation model that represents opaque objects by a translucent medium with a smooth Gaussian density distribution which turns visibility into a smooth phenomenon. We demonstrate the advantages of our versatile scene model in several generative pose estimation problems, namely marker-less multi-object pose estimation, marker-less human motion capture with few cameras, and image-based 3D geometry estimation.}, }
Endnote
%0 Report %A Rhodin, Helge %A Robertini, Nadia %A Richardt, Christian %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9875-C %U http://arxiv.org/abs/1602.03725 %D 2016 %X Generative reconstruction methods compute the 3D configuration (such as pose and/or geometry) of a shape by optimizing the overlap of the projected 3D shape model with images. Proper handling of occlusions is a big challenge, since the visibility function that indicates if a surface point is seen from a camera can often not be formulated in closed form, and is in general discrete and non-differentiable at occlusion boundaries. We present a new scene representation that enables an analytically differentiable closed-form formulation of surface visibility. In contrast to previous methods, this yields smooth, analytically differentiable, and efficient to optimize pose similarity energies with rigorous occlusion handling, fewer local minima, and experimentally verified improved convergence of numerical optimization. The underlying idea is a new image formation model that represents opaque objects by a translucent medium with a smooth Gaussian density distribution which turns visibility into a smooth phenomenon. We demonstrate the advantages of our versatile scene model in several generative pose estimation problems, namely marker-less multi-object pose estimation, marker-less human motion capture with few cameras, and image-based 3D geometry estimation. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Richardt, C., Kim, H., Valgaerts, L., and Theobalt, C. 2016a. Dense Wide-Baseline Scene Flow from Two Handheld Video Cameras. Fourth International Conference on 3D Vision, IEEE Computer Society.
Export
BibTeX
@inproceedings{Richardt3DV2016, TITLE = {Dense Wide-Baseline Scene Flow from Two Handheld Video Cameras}, AUTHOR = {Richardt, Christian and Kim, Hyeongwoo and Valgaerts, Levi and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-5090-5407-7}, DOI = {10.1109/3DV.2016.36}, PUBLISHER = {IEEE Computer Society}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Fourth International Conference on 3D Vision}, PAGES = {276--285}, ADDRESS = {Stanford, CA, USA}, }
Endnote
%0 Conference Proceedings %A Richardt, Christian %A Kim, Hyeongwoo %A Valgaerts, Levi %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute University of Bath Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Dense Wide-Baseline Scene Flow from Two Handheld Video Cameras : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-557C-9 %R 10.1109/3DV.2016.36 %D 2016 %B Fourth International Conference on 3D Vision %Z date of event: 2016-10-25 - 2016-10-28 %C Stanford, CA, USA %B Fourth International Conference on 3D Vision %P 276 - 285 %I IEEE Computer Society %@ 978-1-5090-5407-7
Richardt, C., Kim, H., Valgaerts, L., and Theobalt, C. 2016b. Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. http://arxiv.org/abs/1609.05115.
(arXiv: 1609.05115)
Abstract
We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings.
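The occlusion-filling step described above can be sketched in simplified form. The paper's completion is edge-preserving; the version below is a plain isotropic Laplacian fill (Jacobi iterations with known pixels held fixed), a generic stand-in rather than the paper's method, and all names are illustrative.

```python
import numpy as np

def laplacian_fill(field, occluded, iters=500):
    """Fill occluded pixels of a correspondence field by iterating
    toward a solution of Laplace's equation inside the hole, while
    keeping the known (non-occluded) pixels fixed."""
    f = field.astype(float).copy()
    f[occluded] = f[~occluded].mean()        # neutral initial guess
    for _ in range(iters):
        # 4-neighbour average (np.roll wraps at the border, which is
        # harmless as long as the hole is interior)
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                      + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[occluded] = avg[occluded]          # update only the hole
    return f
```

For a correspondence field that is locally smooth, the filled values interpolate the surrounding known correspondences; the edge-preserving variant would additionally stop this diffusion at image edges.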
Export
BibTeX
@online{RichardtarXiv1609.05115, TITLE = {Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras}, AUTHOR = {Richardt, Christian and Kim, Hyeongwoo and Valgaerts, Levi and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1609.05115}, EPRINT = {1609.05115}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings.}, }
Endnote
%0 Report %A Richardt, Christian %A Kim, Hyeongwoo %A Valgaerts, Levi %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute University of Bath Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9AAF-D %U http://arxiv.org/abs/1609.05115 %D 2016 %X We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Robertini, N., de Aguiar, E., Helten, T., and Theobalt, C. 2016a. Efficient Multi-view Performance Capture of Fine-Scale Surface Detail. http://arxiv.org/abs/1602.02023.
(arXiv: 1602.02023)
Abstract
We present a new effective way for performance capture of deforming meshes with fine-scale time-varying surface detail from multi-view video. Our method builds up on coarse 4D surface reconstructions, as obtained with commonly used template-based methods. As they only capture models of coarse-to-medium scale detail, fine scale deformation detail is often done in a second pass by using stereo constraints, features, or shading-based refinement. In this paper, we propose a new effective and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine scale deformation of all mesh vertices that maximizes photo-consistency can be efficiently found by densely optimizing a new model-to-image consistency energy on all vertex positions. A principal advantage is that our problem formulation yields a smooth closed form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding, or discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we qualitatively and quantitatively show that we robustly capture more detail than related methods.
Export
BibTeX
@online{Robertini_arXiv2016, TITLE = {Efficient Multi-view Performance Capture of Fine-Scale Surface Detail}, AUTHOR = {Robertini, Nadia and de Aguiar, Edilson and Helten, Thomas and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.02023}, DOI = {10.1109/3DV.2014.46}, EPRINT = {1602.02023}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We present a new effective way for performance capture of deforming meshes with fine-scale time-varying surface detail from multi-view video. Our method builds up on coarse 4D surface reconstructions, as obtained with commonly used template-based methods. As they only capture models of coarse-to-medium scale detail, fine scale deformation detail is often done in a second pass by using stereo constraints, features, or shading-based refinement. In this paper, we propose a new effective and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine scale deformation of all mesh vertices that maximizes photo-consistency can be efficiently found by densely optimizing a new model-to-image consistency energy on all vertex positions. A principal advantage is that our problem formulation yields a smooth closed form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding, or discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we qualitatively and quantitatively show that we robustly capture more detail than related methods.}, }
Endnote
%0 Report %A Robertini, Nadia %A de Aguiar, Edilson %A Helten, Thomas %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Multi-view Performance Capture of Fine-Scale Surface Detail : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-07CD-A %R 10.1109/3DV.2014.46 %U http://arxiv.org/abs/1602.02023 %D 2016 %X We present a new effective way for performance capture of deforming meshes with fine-scale time-varying surface detail from multi-view video. Our method builds up on coarse 4D surface reconstructions, as obtained with commonly used template-based methods. As they only capture models of coarse-to-medium scale detail, fine scale deformation detail is often done in a second pass by using stereo constraints, features, or shading-based refinement. In this paper, we propose a new effective and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine scale deformation of all mesh vertices that maximizes photo-consistency can be efficiently found by densely optimizing a new model-to-image consistency energy on all vertex positions. A principal advantage is that our problem formulation yields a smooth closed form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding, or discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we qualitatively and quantitatively show that we robustly capture more detail than related methods. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Graphics, cs.GR
Robertini, N., Casas, D., Rhodin, H., Seidel, H.-P., and Theobalt, C. 2016b. Model-Based Outdoor Performance Capture. Fourth International Conference on 3D Vision, IEEE Computer Society.
Export
BibTeX
@inproceedings{Robertini:2016, TITLE = {Model-Based Outdoor Performance Capture}, AUTHOR = {Robertini, Nadia and Casas, Dan and Rhodin, Helge and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-5090-5407-7}, URL = {http://gvv.mpi-inf.mpg.de/projects/OutdoorPerfcap/}, DOI = {10.1109/3DV.2016.25}, PUBLISHER = {IEEE Computer Society}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Fourth International Conference on 3D Vision}, PAGES = {166--175}, ADDRESS = {Stanford, CA, USA}, }
Endnote
%0 Conference Proceedings %A Robertini, Nadia %A Casas, Dan %A Rhodin, Helge %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Model-Based Outdoor Performance Capture : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4A6D-2 %R 10.1109/3DV.2016.25 %U http://gvv.mpi-inf.mpg.de/projects/OutdoorPerfcap/ %D 2016 %B Fourth International Conference on 3D Vision %Z date of event: 2016-10-25 - 2016-10-28 %C Stanford, CA, USA %B Fourth International Conference on 3D Vision %P 166 - 175 %I IEEE Computer Society %@ 978-1-5090-5407-7
Serrano, A., Heide, F., Gutierrez, D., Wetzstein, G., and Masia, B. 2016a. Convolutional Sparse Coding for High Dynamic Range Imaging. Computer Graphics Forum (Proc. EUROGRAPHICS 2016) 35, 2.
Export
BibTeX
@article{CSHDR_EG2016, TITLE = {Convolutional Sparse Coding for High Dynamic Range Imaging}, AUTHOR = {Serrano, Ana and Heide, Felix and Gutierrez, Diego and Wetzstein, Gordon and Masia, Belen}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12819}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {35}, NUMBER = {2}, PAGES = {153--163}, BOOKTITLE = {The European Association of Computer Graphics 37th Annual Conference (EUROGRAPHICS 2016)}, }
Endnote
%0 Journal Article %A Serrano, Ana %A Heide, Felix %A Gutierrez, Diego %A Wetzstein, Gordon %A Masia, Belen %+ External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Convolutional Sparse Coding for High Dynamic Range Imaging : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-78E5-3 %R 10.1111/cgf.12819 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 2 %& 153 %P 153 - 163 %I Wiley-Blackwell %C Oxford %@ false %B The European Association of Computer Graphics 37th Annual Conference %O EUROGRAPHICS 2016 Lisbon, Portugal, 9th-13th May 2016 EG 2016
Serrano, A., Gutierrez, D., Myszkowski, K., Seidel, H.-P., and Masia, B. 2016b. An Intuitive Control Space for Material Appearance. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
Export
BibTeX
@article{Serrano_MaterialAppearance_2016, TITLE = {An Intuitive Control Space for Material Appearance}, AUTHOR = {Serrano, Ana and Gutierrez, Diego and Myszkowski, Karol and Seidel, Hans-Peter and Masia, Belen}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2980179.2980242}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, EID = {186}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Serrano, Ana %A Gutierrez, Diego %A Myszkowski, Karol %A Seidel, Hans-Peter %A Masia, Belen %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T An Intuitive Control Space for Material Appearance : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82B8-9 %R 10.1145/2980179.2980242 %7 2016 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %Z sequence number: 186 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Serrano, A., Gutierrez, D., Myszkowski, K., Seidel, H.-P., and Masia, B. 2016c. Intuitive Editing of Material Appearance. ACM SIGGRAPH 2016 Posters.
Export
BibTeX
@inproceedings{SerranoSIGGRAPH2016, TITLE = {Intuitive Editing of Material Appearance}, AUTHOR = {Serrano, Ana and Gutierrez, Diego and Myszkowski, Karol and Seidel, Hans-Peter and Masia, Belen}, LANGUAGE = {eng}, ISBN = {978-1-4503-4371-8}, DOI = {10.1145/2945078.2945141}, PUBLISHER = {ACM}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {ACM SIGGRAPH 2016 Posters}, PAGES = {1--2}, EID = {63}, ADDRESS = {Anaheim, CA, USA}, }
Endnote
%0 Generic %A Serrano, Ana %A Gutierrez, Diego %A Myszkowski, Karol %A Seidel, Hans-Peter %A Masia, Belen %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Intuitive Editing of Material Appearance : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0170-C %R 10.1145/2945078.2945141 %D 2016 %Z name of event: the 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques %Z date of event: 2016-07-24 - 2016-07-28 %Z place of event: Anaheim, CA, USA %B ACM SIGGRAPH 2016 Posters %P 1 - 2 %Z sequence number: 63 %@ 978-1-4503-4371-8
Sridhar, S., Mueller, F., Zollhöfer, M., Casas, D., Oulasvirta, A., and Theobalt, C. 2016a. Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input. Max-Planck-Institut für Informatik, Saarbrücken.
Abstract
Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.
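The Gaussian mixture alignment underlying this approach rests on the fact that the overlap of two Gaussian mixtures has a closed form, giving a smooth registration energy. The sketch below shows that generic overlap energy for isotropic 3D Gaussians; the paper's articulated, regularized energy is more elaborate, and all names here are illustrative.

```python
import numpy as np

def mixture_overlap(mu_a, sig_a, mu_b, sig_b):
    """Closed-form overlap integral of two isotropic 3D Gaussian
    mixtures (unit weights): sum over all pairs of the integral of
    the product of the two Gaussians. Higher value = better aligned,
    and the expression is smooth in all pose parameters."""
    d2 = np.sum((mu_a[:, None, :] - mu_b[None, :, :])**2, axis=-1)
    s2 = sig_a[:, None]**2 + sig_b[None, :]**2   # pairwise variance sums
    # integral of N(x; mu_a, sig_a^2 I) * N(x; mu_b, sig_b^2 I) over R^3
    return np.sum(np.exp(-d2 / (2 * s2)) / (2 * np.pi * s2)**1.5)
```

Maximizing such an energy over the pose parameters of one mixture (e.g. a hand model's blob positions) aligns it to the other (e.g. blobs fitted to the depth data) without any explicit, discrete correspondence search.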
Export
BibTeX
@techreport{Report2016-4-001, TITLE = {Real-time Joint Tracking of a Hand Manipulating an Object from {RGB-D} Input}, AUTHOR = {Sridhar, Srinath and Mueller, Franziska and Zollh{\"o}fer, Michael and Casas, Dan and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2016-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Sridhar, Srinath %A Mueller, Franziska %A Zollhöfer, Michael %A Casas, Dan %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-5510-A %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2016 %P 31 p. %X Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness. %B Research Report %@ false
Sridhar, S., Bailly, G., Heydrich, E., Oulasvirta, A., and Theobalt, C. 2016b. FullHand: Markerless Skeleton-based Tracking for Free-Hand Interaction. Max-Planck-Institut für Informatik, Saarbrücken.
Abstract
This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.
Export
BibTeX
@techreport{Report2016-4-002, TITLE = {{FullHand}: {M}arkerless Skeleton-based Tracking for Free-Hand Interaction}, AUTHOR = {Sridhar, Srinath and Bailly, Gilles and Heydrich, Elias and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2016-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Sridhar, Srinath %A Bailly, Gilles %A Heydrich, Elias %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T FullHand: Markerless Skeleton-based Tracking for Free-Hand Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-7456-7 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2016 %P 11 p. %X This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance. %B Research Report %@ false
Sridhar, S., Rhodin, H., Seidel, H.-P., Oulasvirta, A., and Theobalt, C. 2016c. Real-Time Hand Tracking Using a Sum of Anisotropic Gaussians Model. http://arxiv.org/abs/1602.03860.
(arXiv: 1602.03860)
Abstract
Real-time marker-less hand tracking is of increasing importance in human-computer interaction. Robust and accurate tracking of arbitrary hand motion is a challenging problem due to the many degrees of freedom, frequent self-occlusions, fast motions, and uniform skin color. In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real-time. The main contributions include a new generative tracking method which employs an implicit hand shape representation based on Sum of Anisotropic Gaussians (SAG), and a pose fitting energy that is smooth and analytically differentiable making fast gradient based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from literature. Our method achieves better accuracy than previous methods and runs at 25 fps. We show these improvements both qualitatively and quantitatively on publicly available datasets.
Export
BibTeX
@online{Sridhar2016arXiv1602.03860, TITLE = {Real-Time Hand Tracking Using a Sum of Anisotropic {Gaussians} Model}, AUTHOR = {Sridhar, Srinath and Rhodin, Helge and Seidel, Hans-Peter and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.03860}, EPRINT = {1602.03860}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Real-time marker-less hand tracking is of increasing importance in human-computer interaction. Robust and accurate tracking of arbitrary hand motion is a challenging problem due to the many degrees of freedom, frequent self-occlusions, fast motions, and uniform skin color. In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real-time. The main contributions include a new generative tracking method which employs an implicit hand shape representation based on Sum of Anisotropic Gaussians (SAG), and a pose fitting energy that is smooth and analytically differentiable making fast gradient based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from literature. Our method achieves better accuracy than previous methods and runs at 25 fps. We show these improvements both qualitatively and quantitatively on publicly available datasets.}, }
Endnote
%0 Report %A Sridhar, Srinath %A Rhodin, Helge %A Seidel, Hans-Peter %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Real-Time Hand Tracking Using a Sum of Anisotropic Gaussians Model : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9878-6 %U http://arxiv.org/abs/1602.03860 %D 2016 %X Real-time marker-less hand tracking is of increasing importance in human-computer interaction. Robust and accurate tracking of arbitrary hand motion is a challenging problem due to the many degrees of freedom, frequent self-occlusions, fast motions, and uniform skin color. In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real-time. The main contributions include a new generative tracking method which employs an implicit hand shape representation based on Sum of Anisotropic Gaussians (SAG), and a pose fitting energy that is smooth and analytically differentiable making fast gradient based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from literature. Our method achieves better accuracy than previous methods and runs at 25 fps. We show these improvements both qualitatively and quantitatively on publicly available datasets. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Sridhar, S., Mueller, F., Oulasvirta, A., and Theobalt, C. 2016d. Fast and Robust Hand Tracking Using Detection-Guided Optimization. http://arxiv.org/abs/1602.04124.
(arXiv: 1602.04124)
Abstract
Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detection-guided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparably less computational resources which makes it extremely fast (50 fps without GPU support). The approach also supports varying static, or moving, camera-to-scene arrangements. We show the benefits of our method by evaluating on public datasets and comparing against previous work.
Export
BibTeX
@online{SridhararXiv1602.04124, TITLE = {Fast and Robust Hand Tracking Using Detection-Guided Optimization}, AUTHOR = {Sridhar, Srinath and Mueller, Franziska and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.04124}, EPRINT = {1602.04124}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detection-guided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparably less computational resources which makes it extremely fast (50 fps without GPU support). The approach also supports varying static, or moving, camera-to-scene arrangements. We show the benefits of our method by evaluating on public datasets and comparing against previous work.}, }
Endnote
%0 Report %A Sridhar, Srinath %A Mueller, Franziska %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Fast and Robust Hand Tracking Using Detection-Guided Optimization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A76-9 %U http://arxiv.org/abs/1602.04124 %D 2016 %X Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detection-guided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparably less computational resources which makes it extremely fast (50 fps without GPU support). The approach also supports varying static, or moving, camera-to-scene arrangements. We show the benefits of our method by evaluating on public datasets and comparing against previous work. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Sridhar, S., Mueller, F., Zollhöfer, M., Casas, D., Oulasvirta, A., and Theobalt, C. 2016e. Real-Time Joint Tracking of a Hand Manipulating an Object from RGB-D Input. Computer Vision -- ECCV 2016, Springer.
Export
BibTeX
@inproceedings{SridharECCV2016, TITLE = {Real-Time Joint Tracking of a Hand Manipulating an Object from {RGB}-{D} Input}, AUTHOR = {Sridhar, Srinath and Mueller, Franziska and Zollh{\"o}fer, Michael and Casas, Dan and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-3-319-46474-9}, DOI = {10.1007/978-3-319-46475-6_19}, PUBLISHER = {Springer}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Computer Vision -- ECCV 2016}, EDITOR = {Leibe, Bastian and Matas, Jiri and Sebe, Nicu and Welling, Max}, PAGES = {294--310}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9906}, ADDRESS = {Amsterdam, The Netherlands}, }
Endnote
%0 Conference Proceedings %A Sridhar, Srinath %A Mueller, Franziska %A Zollhöfer, Michael %A Casas, Dan %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Real-Time Joint Tracking of a Hand Manipulating an Object from RGB-D Input : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A3D-B %R 10.1007/978-3-319-46475-6_19 %D 2016 %B 14th European Conference on Computer Vision %Z date of event: 2016-10-11 - 2016-10-14 %C Amsterdam, The Netherlands %B Computer Vision -- ECCV 2016 %E Leibe, Bastian; Matas, Jiri; Sebe, Nicu; Welling, Max %P 294 - 310 %I Springer %@ 978-3-319-46474-9 %B Lecture Notes in Computer Science %N 9906
Sridhar, S. 2016. Tracking Hands in Action for Gesture-based Computer Input. urn:nbn:de:bsz:291-scidok-67712.
Export
BibTeX
@phdthesis{SridharPhD2016, TITLE = {Tracking Hands in Action for Gesture-based Computer Input}, AUTHOR = {Sridhar, Srinath}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67712}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Sridhar, Srinath %Y Theobalt, Christian %A referee: Oulasvirta, Antti %A referee: Schiele, Bernt %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Tracking Hands in Action for Gesture-based Computer Input : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-631C-3 %U urn:nbn:de:bsz:291-scidok-67712 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XXIII, 161 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6771/
Steinberger, M., Derler, A., Zayer, R., and Seidel, H.-P. 2016. How Naive is Naive SpMV on the GPU? IEEE High Performance Extreme Computing Conference (HPEC 2016), IEEE.
Export
BibTeX
@inproceedings{SteinbergerHPEC2016, TITLE = {How naive is naive {SpMV} on the {GPU}?}, AUTHOR = {Steinberger, Markus and Derler, Andreas and Zayer, Rhaleb and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-5090-3525-0}, DOI = {10.1109/HPEC.2016.7761634}, PUBLISHER = {IEEE}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {IEEE High Performance Extreme Computing Conference (HPEC 2016)}, PAGES = {1--8}, ADDRESS = {Waltham, MA, USA}, }
Endnote
%0 Conference Proceedings %A Steinberger, Markus %A Derler, Andreas %A Zayer, Rhaleb %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T How Naive is Naive SpMV on the GPU? : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-98A5-F %R 10.1109/HPEC.2016.7761634 %D 2016 %B IEEE High Performance Extreme Computing Conference %Z date of event: 2016-09-13 - 2016-09-15 %C Waltham, MA, USA %B IEEE High Performance Extreme Computing Conference %P 1 - 8 %I IEEE %@ 978-1-5090-3525-0
Templin, K. 2016. Depth, Shading, and Stylization in Stereoscopic Cinematography. urn:nbn:de:bsz:291-scidok-64390.
Export
BibTeX
@phdthesis{Templinphd15, TITLE = {Depth, Shading, and Stylization in Stereoscopic Cinematography}, AUTHOR = {Templin, Krzysztof}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-64390}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Templin, Krzysztof %Y Seidel, Hans-Peter %A referee: Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Depth, Shading, and Stylization in Stereoscopic Cinematography : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-19FA-2 %U urn:nbn:de:bsz:291-scidok-64390 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xii, 100 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6439/
Templin, K., Didyk, P., Myszkowski, K., and Seidel, H.-P. 2016. Emulating Displays with Continuously Varying Frame Rates. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{TemplinSIGGRAPH2016, TITLE = {Emulating Displays with Continuously Varying Frame Rates}, AUTHOR = {Templin, Krzysztof and Didyk, Piotr and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925879}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {67}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Templin, Krzysztof %A Didyk, Piotr %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Emulating Displays with Continuously Varying Frame Rates : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-018D-E %R 10.1145/2897824.2925879 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 67 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., and Nießner, M. 2016a. Face2Face: Real-Time Face Capture and Reenactment of RGB Videos. 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), IEEE Computer Society.
Export
BibTeX
@inproceedings{thies2016face, TITLE = {{Face2Face}: {R}eal-Time Face Capture and Reenactment of {RGB} Videos}, AUTHOR = {Thies, Justus and Zollh{\"o}fer, Michael and Stamminger, Marc and Theobalt, Christian and Nie{\ss}ner, Matthias}, LANGUAGE = {eng}, ISBN = {978-1-4673-8852-8}, DOI = {10.1109/CVPR.2016.262}, PUBLISHER = {IEEE Computer Society}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016)}, PAGES = {2387--2395}, ADDRESS = {Las Vegas, NV, USA}, }
Endnote
%0 Conference Proceedings %A Thies, Justus %A Zollhöfer, Michael %A Stamminger, Marc %A Theobalt, Christian %A Nießner, Matthias %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Face2Face: Real-Time Face Capture and Reenactment of RGB Videos : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4A43-B %R 10.1109/CVPR.2016.262 %D 2016 %B 29th IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2016-06-26 - 2016-07-01 %C Las Vegas, NV, USA %B 29th IEEE Conference on Computer Vision and Pattern Recognition %P 2387 - 2395 %I IEEE Computer Society %@ 978-1-4673-8852-8
Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., and Nießner, M. 2016b. FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality. http://arxiv.org/abs/1610.03151.
(arXiv: 1610.03151)
Abstract
We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly-introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture his facial expressions and eye movement in real-time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call.
Export
BibTeX
@online{thies16FaceVR, TITLE = {{FaceVR}: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality}, AUTHOR = {Thies, Justus and Zollh{\"o}fer, Michael and Stamminger, Marc and Theobalt, Christian and Nie{\ss}ner, Matthias}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1610.03151}, EPRINT = {1610.03151}, EPRINTTYPE = {arXiv}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly-introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture his facial expressions and eye movement in real-time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call.}, }
Endnote
%0 Report %A Thies, Justus %A Zollhöfer, Michael %A Stamminger, Marc %A Theobalt, Christian %A Nießner, Matthias %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4A40-2 %U http://arxiv.org/abs/1610.03151 %D 2016 %X We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly-introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture his facial expressions and eye movement in real-time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., and Nießner, M. 2016c. Demo of Face2Face: Real-time Face Capture and Reenactment of RGB Videos. ACM SIGGRAPH 2016 Emerging Technologies, ACM.
Export
BibTeX
@inproceedings{ThiesSIGGRAPH2016, TITLE = {Demo of {Face2Face}: {R}eal-time face capture and reenactment of {RGB} videos}, AUTHOR = {Thies, Justus and Zollh{\"o}fer, Michael and Stamminger, Marc and Theobalt, Christian and Nie{\ss}ner, Matthias}, LANGUAGE = {eng}, ISBN = {978-1-4503-4372-5}, DOI = {10.1145/2929464.2929475}, PUBLISHER = {ACM}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {ACM SIGGRAPH 2016 Emerging Technologies}, EID = {5}, ADDRESS = {Anaheim, CA, USA}, }
Endnote
%0 Conference Proceedings %A Thies, Justus %A Zollhöfer, Michael %A Stamminger, Marc %A Theobalt, Christian %A Nießner, Matthias %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Demo of Face2Face: Real-time Face Capture and Reenactment of RGB Videos : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A4C-9 %R 10.1145/2929464.2929475 %D 2016 %B 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques %Z date of event: 2016-07-24 - 2016-07-28 %C Anaheim, CA, USA %B ACM SIGGRAPH 2016 Emerging Technologies %Z sequence number: 5 %I ACM %@ 978-1-4503-4372-5
Thies, L., Zollhöfer, M., Richardt, C., Theobalt, C., and Greiner, G. 2016d. Real-time Halfway Domain Reconstruction of Motion and Geometry. Fourth International Conference on 3D Vision, IEEE Computer Society.
Export
BibTeX
@inproceedings{Thies3DV2016, TITLE = {Real-time Halfway Domain Reconstruction of Motion and Geometry}, AUTHOR = {Thies, Lucas and Zollh{\"o}fer, Michael and Richardt, Christian and Theobalt, Christian and Greiner, G{\"u}nther}, LANGUAGE = {eng}, ISBN = {978-1-5090-5407-7}, DOI = {10.1109/3DV.2016.55}, PUBLISHER = {IEEE Computer Society}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Fourth International Conference on 3D Vision}, PAGES = {450--459}, ADDRESS = {Stanford, CA, USA}, }
Endnote
%0 Conference Proceedings %A Thies, Lucas %A Zollhöfer, Michael %A Richardt, Christian %A Theobalt, Christian %A Greiner, Günther %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute University of Bath Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Real-time Halfway Domain Reconstruction of Motion and Geometry : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-B033-8 %R 10.1109/3DV.2016.55 %D 2016 %B Fourth International Conference on 3D Vision %Z date of event: 2016-10-25 - 2016-10-28 %C Stanford, CA, USA %B Fourth International Conference on 3D Vision %P 450 - 459 %I IEEE Computer Society %@ 978-1-5090-5407-7
Velten, A., Wu, D., Masia, B., et al. 2016. Imaging the Propagation of Light through Scenes at Picosecond Resolution. Communications of the ACM 59, 9.
Export
BibTeX
@article{Velten2016, TITLE = {Imaging the Propagation of Light through Scenes at Picosecond Resolution}, AUTHOR = {Velten, Andreas and Wu, Di and Masia, Belen and Jarabo, Adrian and Barsi, Christopher and Joshi, Chinmaya and Lawson, Everett and Bawendi, Moungi and Gutierrez, Diego and Raskar, Ramesh}, LANGUAGE = {eng}, ISSN = {0001-0782}, DOI = {10.1145/2975165}, PUBLISHER = {Association for Computing Machinery, Inc.}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Communications of the ACM}, VOLUME = {59}, NUMBER = {9}, PAGES = {79--86}, }
Endnote
%0 Journal Article %A Velten, Andreas %A Wu, Di %A Masia, Belen %A Jarabo, Adrian %A Barsi, Christopher %A Joshi, Chinmaya %A Lawson, Everett %A Bawendi, Moungi %A Gutierrez, Diego %A Raskar, Ramesh %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations %T Imaging the Propagation of Light through Scenes at Picosecond Resolution : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-7E47-4 %R 10.1145/2975165 %7 2016 %D 2016 %J Communications of the ACM %V 59 %N 9 %& 79 %P 79 - 86 %I Association for Computing Machinery, Inc. %C New York, NY %@ false
Voglreiter, P., Hofmann, M., Ebner, C., et al. 2016. Visualization-Guided Evaluation of Simulated Minimally Invasive Cancer Treatment. Eurographics Workshop on Visual Computing for Biology and Medicine (EG VCBM 2016), Eurographics Association.
Export
BibTeX
@inproceedings{Voglreiter:VES:20161284, TITLE = {Visualization-Guided Evaluation of Simulated Minimally Invasive Cancer Treatment}, AUTHOR = {Voglreiter, Philip and Hofmann, Michael and Ebner, Christoph and Blanco Sequeiros, Roberto and Portugaller, Horst Rupert and F{\"u}tterer, J{\"u}rgen and Moche, Michael and Steinberger, Markus and Schmalstieg, Dieter}, LANGUAGE = {eng}, DOI = {10.2312/vcbm.20161284}, PUBLISHER = {Eurographics Association}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Eurographics Workshop on Visual Computing for Biology and Medicine (EG VCBM 2016)}, EDITOR = {Bruckner, Stefan and Preim, Bernhard and Vilanova, Anna}, PAGES = {163--172}, ADDRESS = {Bergen, Norway}, }
Endnote
%0 Conference Proceedings %A Voglreiter, Philip %A Hofmann, Michael %A Ebner, Christoph %A Blanco Sequeiros, Roberto %A Portugaller, Horst Rupert %A Fütterer, Jürgen %A Moche, Michael %A Steinberger, Markus %A Schmalstieg, Dieter %+ External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Visualization-Guided Evaluation of Simulated Minimally Invasive Cancer Treatment : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-98CD-8 %R 10.2312/vcbm.20161284 %D 2016 %B Eurographics Workshop on Visual Computing for Biology and Medicine %Z date of event: 2016-09-07 - 2016-09-09 %C Bergen, Norway %B Eurographics Workshop on Visual Computing for Biology and Medicine %E Bruckner, Stefan; Preim, Bernhard; Vilanova, Anna %P 163 - 172 %I Eurographics Association
Von Radziewsky, P., Eisemann, E., Seidel, H.-P., and Hildebrandt, K. 2016. Optimized Subspaces for Deformation-based Modeling and Shape Interpolation. Computers and Graphics (Proc. SMI 2016) 58.
Export
BibTeX
@article{Radziewsky2016, TITLE = {Optimized Subspaces for Deformation-based Modeling and Shape Interpolation}, AUTHOR = {von Radziewsky, Philipp and Eisemann, Elmar and Seidel, Hans-Peter and Hildebrandt, Klaus}, LANGUAGE = {eng}, ISSN = {0097-8493}, DOI = {10.1016/j.cag.2016.05.016}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computers and Graphics (Proc. SMI)}, VOLUME = {58}, PAGES = {128--138}, BOOKTITLE = {Shape Modeling International 2016 (SMI 2016)}, }
Endnote
%0 Journal Article %A von Radziewsky, Philipp %A Eisemann, Elmar %A Seidel, Hans-Peter %A Hildebrandt, Klaus %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Optimized Subspaces for Deformation-based Modeling and Shape Interpolation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0144-0 %R 10.1016/j.cag.2016.05.016 %7 2016 %D 2016 %J Computers and Graphics %V 58 %& 128 %P 128 - 138 %I Elsevier %C Amsterdam %@ false %B Shape Modeling International 2016 %O SMI 2016
Wang, Z., Martinez Esturo, J., Seidel, H.-P., and Weinkauf, T. 2016a. Stream Line–Based Pattern Search in Flows. Computer Graphics Forum.
Export
BibTeX
@article{Wang:Esturo:Seidel:Weinkauf2016, TITLE = {Stream Line--Based Pattern Search in Flows}, AUTHOR = {Wang, Zhongjie and Martinez Esturo, Janick and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12990}, PUBLISHER = {Blackwell-Wiley}, ADDRESS = {Oxford}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, JOURNAL = {Computer Graphics Forum}, PAGES = {1--12}, }
Endnote
%0 Journal Article %A Wang, Zhongjie %A Martinez Esturo, Janick %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Stream Line–Based Pattern Search in Flows : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-4301-A %R 10.1111/cgf.12990 %7 2016 %D 2016 %J Computer Graphics Forum %O Computer Graphics Forum : journal of the European Association for Computer Graphics Comput. Graph. Forum %& 1 %P 1 - 12 %I Blackwell-Wiley %C Oxford %@ false
Wang, Z., Seidel, H.-P., and Weinkauf, T. 2016b. Multi-field Pattern Matching Based on Sparse Feature Sampling. IEEE Transactions on Visualization and Computer Graphics 22, 1.
Export
BibTeX
@article{Wang2015, TITLE = {Multi-field Pattern Matching Based on Sparse Feature Sampling}, AUTHOR = {Wang, Zhongjie and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISSN = {1077-2626}, DOI = {10.1109/TVCG.2015.2467292}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics}, VOLUME = {22}, NUMBER = {1}, PAGES = {807--816}, }
Endnote
%0 Journal Article %A Wang, Zhongjie %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Multi-field Pattern Matching Based on Sparse Feature Sampling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-1A76-6 %R 10.1109/TVCG.2015.2467292 %7 2015 %D 2016 %J IEEE Transactions on Visualization and Computer Graphics %V 22 %N 1 %& 807 %P 807 - 816 %I IEEE Computer Society %C New York, NY %@ false
Wu, C., Bradley, D., Garrido, P., et al. 2016. Model-Based Teeth Reconstruction. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
Export
BibTeX
@article{Wu2016SGA, TITLE = {Model-Based Teeth Reconstruction}, AUTHOR = {Wu, Chenglei and Bradley, Derek and Garrido, Pablo and Zollh{\"o}fer, Michael and Theobalt, Christian and Gross, Markus and Beeler, Thabo}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, EID = {220}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Wu, Chenglei %A Bradley, Derek %A Garrido, Pablo %A Zollhöfer, Michael %A Theobalt, Christian %A Gross, Markus %A Beeler, Thabo %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Model-Based Teeth Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-23D0-7 %7 2016 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %Z sequence number: 220 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
2015
Ao, H., Zhang, Y., Jarabo, A., et al. 2015. Light Field Editing Based on Reparameterization. Advances in Multimedia Information Processing -- PCM 2015, Springer.
Export
BibTeX
@inproceedings{AoPCM2015, TITLE = {Light Field Editing Based on Reparameterization}, AUTHOR = {Ao, Hongbo and Zhang, Yongbing and Jarabo, Adrian and Masia, Belen and Liu, Yebin and Gutierrez, Diego and Dai, Qionghai}, LANGUAGE = {eng}, ISBN = {978-3-319-24074-9}, DOI = {10.1007/978-3-319-24075-6_58}, PUBLISHER = {Springer}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Advances in Multimedia Information Processing -- PCM 2015}, EDITOR = {Ho, Yo-Sung and Sang, Jitao and Ro, Yong Man and Kim, Junmo and Wu, Fei}, PAGES = {601--610}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9314}, ADDRESS = {Gwangju, South Korea}, }
Endnote
%0 Conference Proceedings %A Ao, Hongbo %A Zhang, Yongbing %A Jarabo, Adrian %A Masia, Belen %A Liu, Yebin %A Gutierrez, Diego %A Dai, Qionghai %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Light Field Editing Based on Reparameterization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-42DD-0 %R 10.1007/978-3-319-24075-6_58 %D 2015 %B 16th Pacific-Rim Conference on Multimedia %Z date of event: 2015-09-16 - 2015-09-18 %C Gwangju, South Korea %B Advances in Multimedia Information Processing -- PCM 2015 %E Ho, Yo-Sung; Sang, Jitao; Ro, Yong Man; Kim, Junmo; Wu, Fei %P 601 - 610 %I Springer %@ 978-3-319-24074-9 %B Lecture Notes in Computer Science %N 9314
Arpa, S., Ritschel, T., Myszkowski, K., Çapin, T., and Seidel, H.-P. 2015. Purkinje Images: Conveying Different Content for Different Luminance Adaptations in a Single Image. Computer Graphics Forum 34, 1.
Export
BibTeX
@article{arpa2014purkinje, TITLE = {Purkinje Images: {Conveying} Different Content for Different Luminance Adaptations in a Single Image}, AUTHOR = {Arpa, Sami and Ritschel, Tobias and Myszkowski, Karol and {\c C}apin, Tolga and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12463}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum}, VOLUME = {34}, NUMBER = {1}, PAGES = {116--126}, }
Endnote
%0 Journal Article %A Arpa, Sami %A Ritschel, Tobias %A Myszkowski, Karol %A Çapin, Tolga %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Purkinje Images: Conveying Different Content for Different Luminance Adaptations in a Single Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D0B-6 %R 10.1111/cgf.12463 %7 2014-10-18 %D 2015 %J Computer Graphics Forum %V 34 %N 1 %& 116 %P 116 - 126 %I Wiley-Blackwell %C Oxford
Bachynskyi, M. 2015. Physical Ergonomics of Tablet Interaction while Sitting. Abstracts of the 39th Annual Meeting of the American Society of Biomechanics.
Export
BibTeX
@inproceedings{Bachynskyi2015, TITLE = {Physical Ergonomics of Tablet Interaction while Sitting}, AUTHOR = {Bachynskyi, Myroslav}, LANGUAGE = {eng}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Abstracts of the 39th Annual Meeting of the American Society of Biomechanics}, PAGES = {232--233}, ADDRESS = {Columbus, OH, USA}, }
Endnote
%0 Conference Proceedings %A Bachynskyi, Myroslav %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Physical Ergonomics of Tablet Interaction while Sitting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-17CB-2 %D 2015 %B 39th Annual Meeting of the American Society of Biomechanics %Z date of event: 2015-08-05 - 2015-08-08 %C Columbus, OH, USA %B Abstracts of the 39th Annual Meeting of the American Society of Biomechanics %P 232 - 233 %U http://www.asbweb.org/conferences/2015/abstracts/PD5E_3--Physical%20Ergonomics%20Of%20Tablet%20Interaction%20While%20Sitting--(Bachynskyi).pdf
Bachynskyi, M., Palmas, G., Oulasvirta, A., and Weinkauf, T. 2015a. Informing the Design of Novel Input Methods with Muscle Coactivation Clustering. ACM Transactions on Computer-Human Interaction 21, 6.
Export
BibTeX
@article{bachynskyi2014informing, TITLE = {Informing the Design of Novel Input Methods with Muscle Coactivation Clustering}, AUTHOR = {Bachynskyi, Myroslav and Palmas, Gregorio and Oulasvirta, Antti and Weinkauf, Tino}, LANGUAGE = {eng}, DOI = {10.1145/2687921}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Computer-Human Interaction}, VOLUME = {21}, NUMBER = {6}, PAGES = {1--25}, EID = {30}, }
Endnote
%0 Journal Article %A Bachynskyi, Myroslav %A Palmas, Gregorio %A Oulasvirta, Antti %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Informing the Design of Novel Input Methods with Muscle Coactivation Clustering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4D58-8 %R 10.1145/2687921 %7 2015 %D 2015 %J ACM Transactions on Computer-Human Interaction %O TOCHI %V 21 %N 6 %& 1 %P 1 - 25 %Z sequence number: 30 %I ACM %C New York, NY
Bachynskyi, M., Palmas, G., Oulasvirta, A., Steimle, J., and Weinkauf, T. 2015b. Performance and Ergonomics of Touch Surfaces: A Comparative Study Using Biomechanical Simulation. CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{BachynskyiCHI2015, TITLE = {Performance and Ergonomics of Touch Surfaces: {A} Comparative Study Using Biomechanical Simulation}, AUTHOR = {Bachynskyi, Myroslav and Palmas, Gregorio and Oulasvirta, Antti and Steimle, J{\"u}rgen and Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-1-4503-3145-6}, DOI = {10.1145/2702123.2702607}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems}, PAGES = {1817--1826}, ADDRESS = {Seoul, Korea}, }
Endnote
%0 Conference Proceedings %A Bachynskyi, Myroslav %A Palmas, Gregorio %A Oulasvirta, Antti %A Steimle, Jürgen %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Performance and Ergonomics of Touch Surfaces: A Comparative Study Using Biomechanical Simulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-6658-5 %R 10.1145/2702123.2702607 %D 2015 %B 33rd ACM SIGCHI Conference on Human Factors in Computing Systems %Z date of event: 2015-04-18 - 2015-04-23 %C Seoul, Korea %B CHI 2015 %P 1817 - 1826 %I ACM %@ 978-1-4503-3145-6
Bientinesi, P., Herrero, J.R., Quintana-Ortí, E.S., and Strzodka, R. 2015. Parallel Computing on Graphics Processing Units and Heterogeneous Platforms. Concurrency and Computation: Practice and Experience 27, 6.
Export
BibTeX
@article{Bientinesi2014, TITLE = {Parallel Computing on Graphics Processing Units and Heterogeneous Platforms}, AUTHOR = {Bientinesi, Paolo and Herrero, Jos{\'e} R. and Quintana-Ort{\'i}, Enrique S. and Strzodka, Robert}, LANGUAGE = {eng}, ISSN = {1532-0626}, DOI = {10.1002/cpe.3411}, PUBLISHER = {Wiley}, ADDRESS = {Chichester, UK}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Concurrency and Computation: Practice and Experience}, VOLUME = {27}, NUMBER = {6}, PAGES = {1525--1527}, }
Endnote
%0 Journal Article %A Bientinesi, Paolo %A Herrero, José R. %A Quintana-Ortí, Enrique S. %A Strzodka, Robert %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Parallel Computing on Graphics Processing Units and Heterogeneous Platforms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-BF47-A %R 10.1002/cpe.3411 %7 2014-10-07 %D 2015 %J Concurrency and Computation: Practice and Experience %V 27 %N 6 %& 1525 %P 1525 - 1527 %I Wiley %C Chichester, UK %@ false
Brandt, C., Seidel, H.-P., and Hildebrandt, K. 2015. Optimal Spline Approximation via ℓ₀-Minimization. Computer Graphics Forum (Proc. EUROGRAPHICS 2015) 34, 2.
Export
BibTeX
@article{Brandt2015, TITLE = {Optimal Spline Approximation via $\ell_0$-Minimization}, AUTHOR = {Brandt, Christopher and Seidel, Hans-Peter and Hildebrandt, Klaus}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12589}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {34}, NUMBER = {2}, PAGES = {617--626}, BOOKTITLE = {The 36th Annual Conference of the European Association of Computer Graphics (EUROGRAPHICS 2015)}, }
Endnote
%0 Journal Article %A Brandt, Christopher %A Seidel, Hans-Peter %A Hildebrandt, Klaus %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Optimal Spline Approximation via ℓ₀-Minimization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D67-5 %R 10.1111/cgf.12589 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 2 %& 617 %P 617 - 626 %I Wiley-Blackwell %C Oxford %B The 36th Annual Conference of the European Association of Computer Graphics %O EUROGRAPHICS 2015 4th - 8th May 2015, Kongresshaus in Zürich, Switzerland
Calagari, K., Elgharib, M., Didyk, P., Kaspar, A., Matusik, W., and Hefeeda, M. 2015. Gradient-based 2D-to-3D Conversion for Soccer Videos. Proceedings of the 2015 ACM Multimedia Conference, ACM.
Export
BibTeX
@inproceedings{calagari2015gradient, TITLE = {Gradient-based {2D}-to-{3D} Conversion for Soccer Videos}, AUTHOR = {Calagari, Kiana and Elgharib, Mohamed and Didyk, Piotr and Kaspar, Alexandre and Matusik, Wojciech and Hefeeda, Mohamed}, LANGUAGE = {eng}, ISBN = {978-1-4503-3459-4}, DOI = {10.1145/2733373.2806262}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Proceedings of the 2015 ACM Multimedia Conference}, PAGES = {331--340}, ADDRESS = {Brisbane, Australia}, }
Endnote
%0 Conference Proceedings %A Calagari, Kiana %A Elgharib, Mohamed %A Didyk, Piotr %A Kaspar, Alexandre %A Matusik, Wojciech %A Hefeeda, Mohamed %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Gradient-based 2D-to-3D Conversion for Soccer Videos : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4BA3-D %R 10.1145/2733373.2806262 %D 2015 %B 23rd ACM International Conference on Multimedia %Z date of event: 2015-10-26 - 2015-10-30 %C Brisbane, Australia %B Proceedings of the 2015 ACM Multimedia Conference %P 331 - 340 %I ACM %@ 978-1-4503-3459-4
Casas, D., Richardt, C., Collomosse, J., Theobalt, C., and Hilton, A. 2015. 4D Model Flow: Precomputed Appearance Alignment for Real-time 4D Video Interpolation. Computer Graphics Forum (Proc. Pacific Graphics 2015) 34, 7.
Export
BibTeX
@article{CasasPG2015, TITLE = {{4D} Model Flow: {P}recomputed Appearance Alignment for Real-time {4D} Video Interpolation}, AUTHOR = {Casas, Dan and Richardt, Christian and Collomosse, John and Theobalt, Christian and Hilton, Adrian}, LANGUAGE = {eng}, ISSN = {1467-8659}, DOI = {10.1111/cgf.12756}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {34}, NUMBER = {7}, PAGES = {173--182}, BOOKTITLE = {The 23rd Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2015)}, EDITOR = {Mitra, N. J. and Stam, J. and Xu, K.}, }
Endnote
%0 Journal Article %A Casas, Dan %A Richardt, Christian %A Collomosse, John %A Theobalt, Christian %A Hilton, Adrian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T 4D Model Flow: Precomputed Appearance Alignment for Real-time 4D Video Interpolation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5347-8 %R 10.1111/cgf.12756 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 7 %& 173 %P 173 - 182 %I Wiley-Blackwell %C Oxford, UK %@ false %B The 23rd Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2015 PG 2015 Tsinghua University, Beijing, October 7 – 9, 2015
Castaldo, F., Zamir, A., Angst, R., Palmieri, F., and Savarese, S. 2015. Semantic Cross-View Matching. IEEE International Conference on Computer Vision Workshops (ICCVW 2015), IEEE Computer Society.
Export
BibTeX
@inproceedings{AngstICCV_W2015, TITLE = {Semantic Cross-View Matching}, AUTHOR = {Castaldo, Francesco and Zamir, Amir and Angst, Roland and Palmieri, Francesco and Savarese, Silvio}, LANGUAGE = {eng}, ISBN = {978-1-4673-9711-7}, DOI = {10.1109/ICCVW.2015.137}, PUBLISHER = {IEEE Computer Society}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE International Conference on Computer Vision Workshops (ICCVW 2015)}, PAGES = {1044--1052}, ADDRESS = {Santiago, Chile}, }
Endnote
%0 Conference Proceedings %A Castaldo, Francesco %A Zamir, Amir %A Angst, Roland %A Palmieri, Francesco %A Savarese, Silvio %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Semantic Cross-View Matching : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-45A4-0 %R 10.1109/ICCVW.2015.137 %D 2015 %B IEEE International Conference on Computer Vision Workshops %Z date of event: 2015-12-11 - 2015-12-18 %C Santiago, Chile %B IEEE International Conference on Computer Vision Workshops %P 1044 - 1052 %I IEEE Computer Society %@ 978-1-4673-9711-7 %U http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w28/papers/Castaldo_Semantic_Cross-View_Matching_ICCV_2015_paper.pdf
Elhayek, A., de Aguiar, E., Tompson, J., et al. 2015a. Efficient ConvNet-based Marker-less Motion Capture in General Scenes with a Low Number of Cameras. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), IEEE.
Export
BibTeX
@inproceedings{Elhayek15cvpr, TITLE = {Efficient {ConvNet}-based Marker-less Motion Capture in General Scenes with a Low Number of Cameras}, AUTHOR = {Elhayek, Ahmed and de Aguiar, Edilson and Tompson, Jonathan and Jain, Arjun and Pishchulin, Leonid and Andriluka, Mykhaylo and Bregler, Chris and Schiele, Bernt and Theobalt, Christian}, LANGUAGE = {eng}, DOI = {10.1109/CVPR.2015.7299005}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015)}, PAGES = {3810--3818}, ADDRESS = {Boston, MA, USA}, }
Endnote
%0 Conference Proceedings %A Elhayek, Ahmed %A de Aguiar, Edilson %A Tompson, Jonathan %A Jain, Arjun %A Pishchulin, Leonid %A Andriluka, Mykhaylo %A Bregler, Chris %A Schiele, Bernt %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient ConvNet-based Marker-less Motion Capture in General Scenes with a Low Number of Cameras : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0025-01B7-F %R 10.1109/CVPR.2015.7299005 %D 2015 %B IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2015-06-08 - 2015-06-10 %C Boston, MA, USA %B IEEE Conference on Computer Vision and Pattern Recognition %P 3810 - 3818 %I IEEE
Elhayek, A. 2015. Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups. PhD thesis, Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@phdthesis{ElhayekPhd15, TITLE = {Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups}, AUTHOR = {Elhayek, Ahmed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Elhayek, Ahmed %Y Theobalt, Christian %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-48A0-4 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P XIV, 124 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6325/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Elhayek, A., Stoll, C., Kim, K.J., and Theobalt, C. 2015b. Outdoor Human Motion Capture by Simultaneous Optimization of Pose and Camera Parameters. Computer Graphics Forum 34, 6.
Export
BibTeX
@article{CGF:CGF12519, TITLE = {Outdoor Human Motion Capture by Simultaneous Optimization of Pose and Camera Parameters}, AUTHOR = {Elhayek, Ahmed and Stoll, Carsten and Kim, Kil Joong and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12519}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum}, VOLUME = {34}, NUMBER = {6}, PAGES = {86--98}, }
Endnote
%0 Journal Article %A Elhayek, Ahmed %A Stoll, Carsten %A Kim, Kil Joong %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Outdoor Human Motion Capture by Simultaneous Optimization of Pose and Camera Parameters : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF1A-0 %R 10.1111/cgf.12519 %7 2014-12-11 %D 2015 %J Computer Graphics Forum %V 34 %N 6 %& 86 %P 86 - 98 %I Wiley-Blackwell %C Oxford %@ false
Garrido, P., Valgaerts, L., Sarmadi, H., et al. 2015. VDub: Modifying Face Video of Actors for Plausible Visual Alignment to a Dubbed Audio Track. Computer Graphics Forum (Proc. EUROGRAPHICS 2015) 34, 2.
Export
BibTeX
@article{Garrido15, TITLE = {{VDub}: Modifying Face Video of Actors for Plausible Visual Alignment to a Dubbed Audio Track}, AUTHOR = {Garrido, Pablo and Valgaerts, Levi and Sarmadi, Hamid and Steiner, Ingmar and Varanasi, Kiran and Perez, Patrick and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12552}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {34}, NUMBER = {2}, PAGES = {193--204}, BOOKTITLE = {The 36th Annual Conference of the European Association of Computer Graphics (EUROGRAPHICS 2015)}, }
Endnote
%0 Journal Article %A Garrido, Pablo %A Valgaerts, Levi %A Sarmadi, Hamid %A Steiner, Ingmar %A Varanasi, Kiran %A Perez, Patrick %A Theobalt, Christian %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T VDub: Modifying Face Video of Actors for Plausible Visual Alignment to a Dubbed Audio Track : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF2B-8 %R 10.1111/cgf.12552 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 2 %& 193 %P 193 - 204 %I Wiley-Blackwell %C Oxford %@ false %B The 36th Annual Conference of the European Association of Computer Graphics %O EUROGRAPHICS 2015 4th – 8th May 2015, Kongresshaus in Zürich, Switzerland
Georgiev, I. 2015. Path Sampling Techniques for Efficient Light Transport Simulation. PhD thesis, Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@phdthesis{Georgievphd15, TITLE = {Path Sampling Techniques for Efficient Light Transport Simulation}, AUTHOR = {Georgiev, Iliyan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Georgiev, Iliyan %Y Slusallek, Philipp %A referee: Seidel, Hans-Peter %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Path Sampling Techniques for Efficient Light Transport Simulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-6E59-9 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 162 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/urheberrecht.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6152/
Goldlücke, B., Klehm, O., Wanner, S., and Eisemann, E. 2015. Plenoptic Cameras. In: Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality. CRC Press, Boca Raton, FL.
Export
BibTeX
@incollection{KlehmChapter5, TITLE = {Plenoptic Cameras}, AUTHOR = {Goldl{\"u}cke, Bastian and Klehm, Oliver and Wanner, Sven and Eisemann, Elmar}, LANGUAGE = {eng}, ISBN = {978-1482243819}, PUBLISHER = {CRC Press}, ADDRESS = {Boca Raton, FL}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality}, EDITOR = {Magnor, Marcus A. and Grau, Oliver and Sorkine-Hornung, Olga and Theobalt, Christian}, PAGES = {65--78}, }
Endnote
%0 Book Section %A Goldlücke, Bastian %A Klehm, Oliver %A Wanner, Sven %A Eisemann, Elmar %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Plenoptic Cameras : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-6526-6 %D 2015 %B Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality %E Magnor, Marcus A.; Grau, Oliver; Sorkine-Hornung, Olga; Theobalt, Christian %P 65 - 78 %I CRC Press %C Boca Raton, FL %@ 978-1482243819
Granados, M., Aydın, T.O., Tena, J.R., Lalonde, J.-F., and Theobalt, C. 2015a. Contrast-Use Metrics for Tone Mapping Images. IEEE International Conference on Computational Photography (ICCP 2015), IEEE.
Export
BibTeX
@inproceedings{granados2015contrast, TITLE = {Contrast-Use Metrics for Tone Mapping Images}, AUTHOR = {Granados, Miguel and Ayd{\i}n, Tun{\c c} Ozan and Tena, J. Rafael and Lalonde, Jean-Fran{\c c}ois and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4799-8667-5}, DOI = {10.1109/ICCPHOT.2015.7168364}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE International Conference on Computational Photography (ICCP 2015)}, PAGES = {1--8}, ADDRESS = {Houston, TX, USA}, }
Endnote
%0 Conference Proceedings %A Granados, Miguel %A Aydın, Tunç Ozan %A Tena, J. Rafael %A Lalonde, Jean-François %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Contrast-Use Metrics for Tone Mapping Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-6521-0 %R 10.1109/ICCPHOT.2015.7168364 %D 2015 %B IEEE International Conference on Computational Photography %Z date of event: 2015-04-24 - 2015-04-26 %C Houston, TX, USA %B IEEE International Conference on Computational Photography %P 1 - 8 %I IEEE %@ 978-1-4799-8667-5
Granados, M., Aydin, T.O., Tena, J.R., Lalonde, J.-F., and Theobalt, C. 2015b. HDR Image Noise Estimation for Denoising Tone Mapped Images. Proceedings of the 12th European Conference on Visual Media Production (CVMP 2015), ACM.
Export
BibTeX
@inproceedings{GranadosCVMP2015, TITLE = {{HDR} Image Noise Estimation for Denoising Tone Mapped Images}, AUTHOR = {Granados, Miguel and Aydin, Tunc Ozan and Tena, J. Rafael and Lalonde, Jean-Fran{\c c}ois and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4503-3560-7}, DOI = {10.1145/2824840.2824847}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Proceedings of the 12th European Conference on Visual Media Production (CVMP 2015)}, EDITOR = {Collomosse, John and Cosker, Darren}, EID = {7}, ADDRESS = {London, UK}, }
Endnote
%0 Conference Proceedings %A Granados, Miguel %A Aydin, Tunc Ozan %A Tena, J. Rafael %A Lalonde, Jean-Fran&#231;ois %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T HDR Image Noise Estimation for Denoising Tone Mapped Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5335-0 %R 10.1145/2824840.2824847 %D 2015 %B 12th European Conference on Visual Media Production %Z date of event: 2014-11-24 - 2014-11-25 %C London, UK %B Proceedings of the 12th European Conference on Visual Media Production %E Collomosse, John; Cosker, Darren %Z sequence number: 7 %I ACM %@ 978-1-4503-3560-7
Grochulla, M.P. and Thormählen, T. 2015. Combining Photometric Normals and Multi-View Stereo for 3D Reconstruction. Proceedings of the 12th European Conference on Visual Media Production (CVMP 2015), ACM.
Export
BibTeX
@inproceedings{GrochullaCVMP2015, TITLE = {Combining Photometric Normals and Multi-View Stereo for {3D} Reconstruction}, AUTHOR = {Grochulla, Martin Peter and Thorm{\"a}hlen, Thorsten}, LANGUAGE = {eng}, ISBN = {978-1-4503-3560-7}, DOI = {10.1145/2824840.2824846}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Proceedings of the 12th European Conference on Visual Media Production (CVMP 2015)}, EDITOR = {Collomosse, John and Cosker, Darren}, EID = {7}, ADDRESS = {London, UK}, }
Endnote
%0 Conference Proceedings %A Grochulla, Martin Peter %A Thorm&#228;hlen, Thorsten %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Combining Photometric Normals and Multi-View Stereo for 3D Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-4DA8-4 %R 10.1145/2824840.2824846 %D 2015 %B 12th European Conference on Visual Media Production %Z date of event: 2014-11-24 - 2014-11-25 %C London, UK %B Proceedings of the 12th European Conference on Visual Media Production %E Collomosse, John; Cosker, Darren %Z sequence number: 7 %I ACM %@ 978-1-4503-3560-7
Gryaditskaya, Y., Pouli, T., Reinhard, E., Myszkowski, K., and Seidel, H.-P. 2015. Motion Aware Exposure Bracketing for HDR Video. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2015) 34, 4.
Export
BibTeX
@article{Gryaditskaya2015, TITLE = {Motion Aware Exposure Bracketing for {HDR} Video}, AUTHOR = {Gryaditskaya, Yulia and Pouli, Tania and Reinhard, Erik and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12684}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {34}, NUMBER = {4}, PAGES = {119--130}, BOOKTITLE = {Eurographics Symposium on Rendering 2015}, EDITOR = {Lehtinen, Jaakko and Nowrouzezahrai, Derek}, }
Endnote
%0 Journal Article %A Gryaditskaya, Yulia %A Pouli, Tania %A Reinhard, Erik %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Motion Aware Exposure Bracketing for HDR Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-15D2-B %R 10.1111/cgf.12684 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 4 %& 119 %P 119 - 130 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2015 %O Eurographics Symposium on Rendering 2015 EGSR 2015 Darmstadt, Germany, June 24th - 26th, 2015
Herzog, R., Mewes, D., Wand, M., Guibas, L., and Seidel, H.-P. 2015. LeSSS: Learned Shared Semantic Spaces for Relating Multi-modal Representations of 3D Shapes. Computer Graphics Forum (Proc. Eurographics Symposium on Geometric Processing 2015) 34, 5.
Export
BibTeX
@article{HerzogSGP2015, TITLE = {{LeSSS}: {L}earned {S}hared {S}emantic {S}paces for Relating Multi-Modal Representations of {3D} Shapes}, AUTHOR = {Herzog, Robert and Mewes, Daniel and Wand, Michael and Guibas, Leonidas and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12703}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Chichester}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Geometric Processing)}, VOLUME = {34}, NUMBER = {5}, PAGES = {141--151}, BOOKTITLE = {Symposium on Geometry Processing 2015 (Eurographics Symposium on Geometric Processing 2015)}, EDITOR = {Ben-Chen, Mirela and Liu, Ligang}, }
Endnote
%0 Journal Article %A Herzog, Robert %A Mewes, Daniel %A Wand, Michael %A Guibas, Leonidas %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T LeSSS: Learned Shared Semantic Spaces for Relating Multi-modal Representations of 3D Shapes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-8E9A-6 %R 10.1111/cgf.12703 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 5 %& 141 %P 141 - 151 %I Wiley-Blackwell %C Chichester %@ false %B Symposium on Geometry Processing 2015 %O Graz, Austria, July 6 - 8, 2015 SGP 2015 Eurographics Symposium on Geometric Processing 2015
Hulea, R.F. 2015. Compressed Vibration Modes for Deformable Objects. Master's Thesis, Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@mastersthesis{HuleaMaster2015, TITLE = {Compressed Vibration Modes for Deformable Objects}, AUTHOR = {Hulea, Razvan Florin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Hulea, Razvan Florin %Y Hildebrandt, Klaus %A referee: Seidel, Hans-Peter %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Compressed Vibration Modes for Deformable Objects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-2EAF-3 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 47 p. %V master %9 master
Jain, A., Chen, C., Thormählen, T., Metaxas, D., and Seidel, H.-P. 2015. Multi-layer Stencil Creation from Images. Computers and Graphics 48.
Export
BibTeX
@article{JainMulti-layer2015, TITLE = {Multi-layer Stencil Creation from Images}, AUTHOR = {Jain, Arjun and Chen, Chao and Thorm{\"a}hlen, Thorsten and Metaxas, Dimitris and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0097-8493}, DOI = {10.1016/j.cag.2015.02.003}, PUBLISHER = {Pergamon}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computers and Graphics}, VOLUME = {48}, PAGES = {11--22}, }
Endnote
%0 Journal Article %A Jain, Arjun %A Chen, Chao %A Thorm&#228;hlen, Thorsten %A Metaxas, Dimitris %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Multi-layer Stencil Creation from Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-9C34-A %R 10.1016/j.cag.2015.02.003 %7 2015-02-26 %D 2015 %J Computers and Graphics %V 48 %& 11 %P 11 - 22 %I Pergamon %C New York, NY %@ false
Kellnhofer, P., Ritschel, T., Myszkowski, K., Eisemann, E., and Seidel, H.-P. 2015a. Modeling Luminance Perception at Absolute Threshold. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2015) 34, 4.
Export
BibTeX
@article{Kellnhofer2015a, TITLE = {Modeling Luminance Perception at Absolute Threshold}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Eisemann, Elmar and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12687}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {34}, NUMBER = {4}, PAGES = {155--164}, BOOKTITLE = {Eurographics Symposium on Rendering 2015}, EDITOR = {Lehtinen, Jaakko and Nowrouzezahrai, Derek}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Eisemann, Elmar %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Modeling Luminance Perception at Absolute Threshold : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-8E8D-4 %R 10.1111/cgf.12687 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 4 %& 155 %P 155 - 164 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2015 %O Eurographics Symposium on Rendering 2015 EGSR 2015 Darmstadt, Germany, June 24th - 26th, 2015
Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2015b. A Transformation-aware Perceptual Image Metric. Human Vision and Electronic Imaging XX (HVEI 2015), SPIE/IS&T.
Abstract
Predicting human visual perception of image differences has several applications such as compression, rendering, editing and retargeting. Current approaches however, ignore the fact that the human visual system compensates for geometric transformations, e.g. we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed gets increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used for applications, such as comparison of non-aligned images, where transformations cause threshold elevation, and detection of salient transformations.
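The abstract's pipeline converts fitted local homographies into fields of elementary transformations (translation, rotation, scaling, perspective). As an illustrative sketch only, not the paper's implementation, the following decomposes a 2D affine approximation of such a local transformation into elementary parts via the standard rotation/upper-triangular factorization; the function name `decompose_affine` and the factorization choice are assumptions made here for illustration.

```python
import numpy as np

def decompose_affine(A, t):
    """Split a 2D affine map x -> A @ x + t into elementary parts:
    translation, rotation angle, per-axis scales, and a shear factor.
    Uses the factorization A = R(theta) @ [[sx, m], [0, sy]]."""
    theta = np.arctan2(A[1, 0], A[0, 0])   # rotation angle of the first column
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])        # pure rotation by theta
    K = R.T @ A                            # remaining upper-triangular part
    sx, sy = K[0, 0], K[1, 1]              # per-axis scale factors
    shear = K[0, 1] / sx                   # normalized shear term
    return t, theta, (sx, sy), shear

# Example: rotation by 30 degrees combined with uniform scale 2 and a shift
theta0 = np.deg2rad(30.0)
A = 2.0 * np.array([[np.cos(theta0), -np.sin(theta0)],
                    [np.sin(theta0),  np.cos(theta0)]])
t, theta, scale, shear = decompose_affine(A, np.array([1.0, 0.0]))
```

A per-patch decomposition like this yields the kind of elementary-transformation field over which a complexity measure such as the paper's transformation entropy could be evaluated.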
Export
BibTeX
@inproceedings{Kellnhofer2015, TITLE = {A Transformation-aware Perceptual Image Metric}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {9781628414844}, DOI = {10.1117/12.2076754}, PUBLISHER = {SPIE/IS\&T}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, ABSTRACT = {Predicting human visual perception of image differences has several applications such as compression, rendering, editing and retargeting. Current approaches however, ignore the fact that the human visual system compensates for geometric transformations, e.g. we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed gets increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used for applications, such as comparison of non-aligned images, where transformations cause threshold elevation, and detection of salient transformations.}, BOOKTITLE = {Human Vision and Electronic Imaging XX (HVEI 2015)}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N. and de Ridder, Huib}, EID = {939408}, SERIES = {Proceedings of SPIE}, VOLUME = {9394}, ADDRESS = {San Francisco, CA, USA}, }
Endnote
%0 Conference Proceedings %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Transformation-aware Perceptual Image Metric : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-544A-4 %R 10.1117/12.2076754 %D 2015 %B Human Vision and Electronic Imaging XX %Z date of event: 2015-02-08 - 2015-02-12 %C San Francisco, CA, USA %X Predicting human visual perception of image differences has several applications such as compression, rendering, editing and retargeting. Current approaches however, ignore the fact that the human visual system compensates for geometric transformations, e.g. we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed gets increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used for applications, such as comparison of non-aligned images, where transformations cause threshold elevation, and detection of salient transformations. 
%B Human Vision and Electronic Imaging XX %E Rogowitz, Bernice E.; Pappas, Thrasyvoulos N.; de Ridder, Huib %Z sequence number: 939408 %I SPIE/IS&T %@ 9781628414844 %B Proceedings of SPIE %N 9394
Kellnhofer, P., Leimkühler, T., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2015c. What Makes 2D-to-3D Stereo Conversion Perceptually Plausible? Proceedings SAP 2015, ACM.
Export
BibTeX
@inproceedings{Kellnhofer2015SAP, TITLE = {What Makes {2D}-to-{3D} Stereo Conversion Perceptually Plausible?}, AUTHOR = {Kellnhofer, Petr and Leimk{\"u}hler, Thomas and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, ISBN = {978-1-4503-3812-7}, DOI = {10.1145/2804408.2804409}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Proceedings SAP 2015}, PAGES = {59--66}, ADDRESS = {T{\"u}bingen, Germany}, }
Endnote
%0 Conference Proceedings %A Kellnhofer, Petr %A Leimk&#252;hler, Thomas %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T What Makes 2D-to-3D Stereo Conversion Perceptually Plausible? : %U http://hdl.handle.net/11858/00-001M-0000-0029-2460-7 %R 10.1145/2804408.2804409 %D 2015 %B ACM SIGGRAPH Symposium on Applied Perception %Z date of event: 2015-09-13 - 2015-09-14 %C T&#252;bingen, Germany %B Proceedings SAP 2015 %P 59 - 66 %I ACM %@ 978-1-4503-3812-7 %U http://resources.mpi-inf.mpg.de/StereoCueFusion/WhatMakes3D/
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2015a. Semi-supervised Learning with Explicit Relationship Regularization. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), IEEE.
Export
BibTeX
@inproceedings{KimCVPR2015, TITLE = {Semi-supervised Learning with Explicit Relationship Regularization}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, LANGUAGE = {eng}, DOI = {10.1109/CVPR.2015.7298831}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015)}, PAGES = {2188--2196}, ADDRESS = {Boston, MA, USA}, }
Endnote
%0 Conference Proceedings %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Semi-supervised Learning with Explicit Relationship Regularization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A6D-0 %R 10.1109/CVPR.2015.7298831 %D 2015 %B IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2015-06-07 - 2015-06-12 %C Boston, MA, USA %B IEEE Conference on Computer Vision and Pattern Recognition %P 2188 - 2196 %I IEEE
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2015b. Local High-order Regularization on Data Manifolds. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), IEEE.
Export
BibTeX
@inproceedings{Kim2015cvpr, TITLE = {Local High-order Regularization on Data Manifolds}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4673-6965-7}, DOI = {10.1109/CVPR.2015.7299186}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015)}, PAGES = {5473--5481}, ADDRESS = {Boston, MA, USA}, }
Endnote
%0 Conference Proceedings %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Local High-order Regularization on Data Manifolds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A5F-0 %R 10.1109/CVPR.2015.7299186 %D 2015 %B IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2015-06-08 - 2015-06-10 %C Boston, MA, USA %B IEEE Conference on Computer Vision and Pattern Recognition %P 5473 - 5481 %I IEEE %@ 978-1-4673-6965-7
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2015c. Context-guided Diffusion for Label Propagation on Graphs. ICCV 2015, IEEE International Conference on Computer Vision, IEEE.
Export
BibTeX
@inproceedings{KimICCV2015, TITLE = {Context-guided Diffusion for Label Propagation on Graphs}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4673-8390-5}, DOI = {10.1109/ICCV.2015.318}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {ICCV 2015, IEEE International Conference on Computer Vision}, PAGES = {2776--2784}, ADDRESS = {Santiago, Chile}, }
Endnote
%0 Conference Proceedings %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Context-guided Diffusion for Label Propagation on Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-52EF-9 %R 10.1109/ICCV.2015.318 %D 2015 %B IEEE International Conference on Computer Vision %Z date of event: 2015-12-13 - 2015-12-16 %C Santiago, Chile %B ICCV 2015 %P 2776 - 2784 %I IEEE %@ 978-1-4673-8390-5 %U http://www.cv-foundation.org/openaccess/content_iccv_2015/html/Kim_Context-Guided_Diffusion_for_ICCV_2015_paper.html
Klehm, O., Rousselle, F., Papas, M., et al. 2015a. Recent Advances in Facial Appearance Capture. Computer Graphics Forum (Proc. EUROGRAPHICS 2015) 34, 2.
Export
BibTeX
@article{Klehm2015Recent, TITLE = {Recent Advances in Facial Appearance Capture}, AUTHOR = {Klehm, Oliver and Rousselle, Fabrice and Papas, Marios and Bradley, Derek and Hery, Christophe and Bickel, Bernd and Jarosz, Wojciech and Beeler, Thabo}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12594}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {34}, NUMBER = {2}, PAGES = {709--733}, BOOKTITLE = {The 36th Annual Conference of the European Association of Computer Graphics (EUROGRAPHICS 2015)}, }
Endnote
%0 Journal Article %A Klehm, Oliver %A Rousselle, Fabrice %A Papas, Marios %A Bradley, Derek %A Hery, Christophe %A Bickel, Bernd %A Jarosz, Wojciech %A Beeler, Thabo %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations %T Recent Advances in Facial Appearance Capture : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-5042-A %R 10.1111/cgf.12594 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 2 %& 709 %P 709 - 733 %I Wiley-Blackwell %C Oxford %@ false %B The 36th Annual Conference of the European Association of Computer Graphics %O EUROGRAPHICS 2015 Z&#252;rich, Switzerland ; May 4th &#8211; 8th, 2015
Klehm, O., Kol, T.R., Seidel, H.-P., and Eisemann, E. 2015b. Stylized Scattering via Transfer Functions and Occluder Manipulation. Graphics Interface 2015, Graphics Interface Conference, Canadian Information Processing Society.
Export
BibTeX
@inproceedings{KlehmGI2015, TITLE = {Stylized Scattering via Transfer Functions and Occluder Manipulation}, AUTHOR = {Klehm, Oliver and Kol, Timothy R. and Seidel, Hans-Peter and Eisemann, Elmar}, LANGUAGE = {eng}, ISBN = {978-0-9947868-0-7}, PUBLISHER = {Canadian Information Processing Society}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Graphics Interface 2015, Graphics Interface Conference}, EDITOR = {Zhang, Hao Richard and Tang, Tony}, PAGES = {115--121}, ADDRESS = {Halifax, Canada}, }
Endnote
%0 Conference Proceedings %A Klehm, Oliver %A Kol, Timothy R. %A Seidel, Hans-Peter %A Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Stylized Scattering via Transfer Functions and Occluder Manipulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D415-8 %D 2015 %B Graphics Interface Conference %Z date of event: 2015-06-03 - 2015-06-05 %C Halifax, Canada %B Graphics Interface 2015 %E Zhang, Hao Richard; Tang, Tony %P 115 - 121 %I Canadian Information Processing Society %@ 978-0-9947868-0-7
Kwon, Y., Kim, K.I., Tompkin, J., Kim, J.H., and Theobalt, C. 2015. Efficient Learning of Image Super-resolution and Compression Artifact Removal with Semi-local Gaussian Processes. IEEE Transactions on Pattern Analysis and Machine Intelligence 37, 9.
Export
BibTeX
@article{Kwon:2014:TPAMI, TITLE = {Efficient Learning of Image Super-resolution and Compression Artifact Removal with Semi-local {G}aussian Processes}, AUTHOR = {Kwon, Younghee and Kim, Kwang In and Tompkin, James and Kim, Jin Hyung and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0162-8828}, DOI = {10.1109/TPAMI.2015.2389797}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {IEEE Transactions on Pattern Analysis and Machine Intelligence}, VOLUME = {37}, NUMBER = {9}, PAGES = {1792--1805}, }
Endnote
%0 Journal Article %A Kwon, Younghee %A Kim, Kwang In %A Tompkin, James %A Kim, Jin Hyung %A Theobalt, Christian %+ External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Learning of Image Super-resolution and Compression Artifact Removal with Semi-local Gaussian Processes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF0A-3 %R 10.1109/TPAMI.2015.2389797 %7 2015-01-09 %D 2015 %J IEEE Transactions on Pattern Analysis and Machine Intelligence %O IEEE Trans. Pattern Anal. Mach. Intell. %V 37 %N 9 %& 1792 %P 1792 - 1805 %I IEEE Computer Society %C Los Alamitos, CA %@ false
Li, C., Wand, M., Wu, X., and Seidel, H.-P. 2015. Approximate 3D Partial Symmetry Detection Using Co-occurrence Analysis. International Conference on 3D Vision, IEEE.
Export
BibTeX
@inproceedings{Li3DV2015, TITLE = {Approximate {3D} Partial Symmetry Detection Using Co-occurrence Analysis}, AUTHOR = {Li, Chuan and Wand, Michael and Wu, Xiaokun and Seidel, Hans-Peter}, ISBN = {978-1-4673-8333-2}, DOI = {10.1109/3DV.2015.55}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {International Conference on 3D Vision}, EDITOR = {Brown, Michael and Kosecka, Jana and Theobalt, Christian}, PAGES = {425--433}, ADDRESS = {Lyon, France}, }
Endnote
%0 Conference Proceedings %A Li, Chuan %A Wand, Michael %A Wu, Xiaokun %A Seidel, Hans-Peter %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Approximate 3D Partial Symmetry Detection Using Co-occurrence Analysis : %U http://hdl.handle.net/11858/00-001M-0000-002B-34D8-0 %R 10.1109/3DV.2015.55 %D 2015 %B International Conference on 3D Vision %Z date of event: 2015-10-19 - 2015-10-22 %C Lyon, France %B International Conference on 3D Vision %E Brown, Michael; Kosecka, Jana; Theobalt, Christian %P 425 - 433 %I IEEE %@ 978-1-4673-8333-2
Magnor, M.A., Grau, O., Sorkine-Hornung, O., and Theobalt, C., eds. 2015. Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality. CRC Press, Boca Raton, FL.
Export
BibTeX
@book{magnor2015digital, TITLE = {Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality}, EDITOR = {Magnor, Marcus A. and Grau, Oliver and Sorkine-Hornung, Olga and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1482243819}, PUBLISHER = {CRC Press}, ADDRESS = {Boca Raton, FL}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, PAGES = {455 p.}, }
Endnote
%0 Edited Book %A Magnor, Marcus A. %A Grau, Oliver %A Sorkine-Hornung, Olga %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-64EE-3 %@ 978-1482243819 %I CRC Press %C Boca Raton, FL %D 2015 %P 455 p.
Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2015. High Dynamic Range Imaging. In: Wiley Encyclopedia of Electrical and Electronics Engineering. Wiley, New York, NY.
Export
BibTeX
@incollection{MantiukEncyclopedia2015, TITLE = {High Dynamic Range Imaging}, AUTHOR = {Mantiuk, Rafa{\l} and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1002/047134608X.W8265}, PUBLISHER = {Wiley}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Wiley Encyclopedia of Electrical and Electronics Engineering}, EDITOR = {Webster, John G.}, PAGES = {1--42}, }
Endnote
%0 Book Section %A Mantiuk, Rafa&#322; %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T High Dynamic Range Imaging : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-A376-B %R 10.1002/047134608X.W8265 %D 2015 %8 15.06.2015 %B Wiley Encyclopedia of Electrical and Electronics Engineering %E Webster, John G. %P 1 - 42 %I Wiley %C New York, NY
Michels, D.L. and Desbrun, M. 2015. A Semi-analytical Approach to Molecular Dynamics. Journal of Computational Physics 303.
Export
BibTeX
@article{Michels2015, TITLE = {A Semi-analytical Approach to Molecular Dynamics}, AUTHOR = {Michels, Dominik L. and Desbrun, Mathieu}, LANGUAGE = {eng}, ISSN = {0021-9991}, DOI = {10.1016/j.jcp.2015.10.009}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Journal of Computational Physics}, VOLUME = {303}, PAGES = {336--354}, }
Endnote
%0 Journal Article %A Michels, Dominik L. %A Desbrun, Mathieu %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T A Semi-analytical Approach to Molecular Dynamics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-34FB-C %R 10.1016/j.jcp.2015.10.009 %7 2015 %D 2015 %J Journal of Computational Physics %V 303 %& 336 %P 336 - 354 %I Elsevier %C Amsterdam %@ false
Nalbach, O., Ritschel, T., and Seidel, H.-P. 2015. The Bounced Z-buffer for Indirect Visibility. VMV 2015 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{NalbachVMV2015, TITLE = {The Bounced {Z}-buffer for Indirect Visibility}, AUTHOR = {Nalbach, Oliver and Ritschel, Tobias and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-3-905674-95-8}, DOI = {10.2312/vmv.20151261}, PUBLISHER = {Eurographics Association}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {VMV 2015 Vision, Modeling and Visualization}, EDITOR = {Bommes, David and Ritschel, Tobias and Schultz, Thomas}, PAGES = {79--86}, ADDRESS = {Aachen, Germany}, }
Endnote
%0 Conference Proceedings %A Nalbach, Oliver %A Ritschel, Tobias %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T The Bounced Z-buffer for Indirect Visibility : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-F762-F %R 10.2312/vmv.20151261 %D 2015 %B 20th International Symposium on Vision, Modeling and Visualization %Z date of event: 2015-10-07 - 2015-10-09 %C Aachen, Germany %B VMV 2015 Vision, Modeling and Visualization %E Bommes, David; Ritschel, Tobias; Schultz, Thomas %P 79 - 86 %I Eurographics Association %@ 978-3-905674-95-8
Nguyen, C., Nalbach, O., Ritschel, T., and Seidel, H.-P. 2015a. Guiding Image Manipulations Using Shape-appearance Subspaces from Co-alignment of Image Collections. Computer Graphics Forum (Proc. EUROGRAPHICS 2015) 34, 2.
Export
BibTeX
@article{NguyenEG2015, TITLE = {Guiding Image Manipulations Using Shape-appearance Subspaces from Co-alignment of Image Collections}, AUTHOR = {Nguyen, Chuong and Nalbach, Oliver and Ritschel, Tobias and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12548}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {34}, NUMBER = {2}, PAGES = {143--154}, BOOKTITLE = {The 36th Annual Conference of the European Association of Computer Graphics (EUROGRAPHICS 2015)}, }
Endnote
%0 Journal Article %A Nguyen, Chuong %A Nalbach, Oliver %A Ritschel, Tobias %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Guiding Image Manipulations Using Shape-appearance Subspaces from Co-alignment of Image Collections : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D6A-0 %R 10.1111/cgf.12548 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 2 %& 143 %P 143 - 154 %I Wiley-Blackwell %C Oxford %B The 36th Annual Conference of the European Association of Computer Graphics %O EUROGRAPHICS 2015 4th &#8211; 8th May 2015, Kongresshaus in Z&#252;rich, Switzerland EG 2015
Nguyen, C., Ritschel, T., and Seidel, H.-P. 2015b. Data-driven Color Manifolds. ACM Transactions on Graphics 34, 2.
Export
BibTeX
@article{NguyenTOG2015, TITLE = {Data-driven Color Manifolds}, AUTHOR = {Nguyen, Chuong and Ritschel, Tobias and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1145/2699645}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics}, VOLUME = {34}, NUMBER = {2}, EID = {20}, }
Endnote
%0 Journal Article %A Nguyen, Chuong %A Ritschel, Tobias %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Data-driven Color Manifolds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-680A-D %R 10.1145/2699645 %7 2015 %D 2015 %J ACM Transactions on Graphics %V 34 %N 2 %Z sequence number: 20 %I ACM %C New York, NY
Nguyen, C. 2015. Data-driven Approaches for Interactive Appearance Editing. urn:nbn:de:bsz:291-scidok-62372.
Export
BibTeX
@phdthesis{NguyenPhD2015, TITLE = {Data-driven Approaches for Interactive Appearance Editing}, AUTHOR = {Nguyen, Chuong}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-62372}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Nguyen, Chuong %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Data-driven Approaches for Interactive Appearance Editing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-9C47-9 %U urn:nbn:de:bsz:291-scidok-62372 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P XVII, 134 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6237/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Olberding, S. 2015. Fabricating Custom-shaped Thin-film Interactive Surfaces. urn:nbn:de:bsz:291-scidok-63285.
Export
BibTeX
@phdthesis{OlberdingPhD2015, TITLE = {Fabricating Custom-shaped Thin-film Interactive Surfaces}, AUTHOR = {Olberding, Simon}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-63285}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Olberding, Simon %Y Steimle, J&#252;rgen %A referee: Kr&#252;ger, Antonio %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Fabricating Custom-shaped Thin-film Interactive Surfaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5EF8-2 %U urn:nbn:de:bsz:291-scidok-63285 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P XVI, 145 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6328/
Olberding, S., Ortega, S.S., Hildebrandt, K., and Steimle, J. 2015. Foldio: Digital Fabrication of Interactive and Shape-changing Objects With Foldable Printed Electronics. UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, ACM.
Export
BibTeX
@inproceedings{OlberdingUIST2015, TITLE = {Foldio: {D}igital Fabrication of Interactive and Shape-changing Objects With Foldable Printed Electronics}, AUTHOR = {Olberding, Simon and Ortega, Sergio Soto and Hildebrandt, Klaus and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, DOI = {10.1145/2807442.2807494}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {UIST'15, 28th Annual ACM Symposium on User Interface Software and Technology}, PAGES = {223--232}, ADDRESS = {Charlotte, NC, USA}, }
Endnote
%0 Conference Proceedings %A Olberding, Simon %A Ortega, Sergio Soto %A Hildebrandt, Klaus %A Steimle, J&#252;rgen %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Foldio: Digital Fabrication of Interactive and Shape-changing Objects With Foldable Printed Electronics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-6646-D %R 10.1145/2807442.2807494 %D 2015 %B 28th Annual ACM Symposium on User Interface Software and Technology %Z date of event: 2015-11-08 - 2015-11-11 %C Charlotte, NC, USA %B UIST'15 %P 223 - 232 %I ACM
Pepik, B. 2015. Richer Object Representations for Object Class Detection in Challenging Real World Images.
Export
BibTeX
@phdthesis{Pepikphd15, TITLE = {Richer Object Representations for Object Class Detection in Challenging Real World Images}, AUTHOR = {Pepik, Bojan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Pepik, Bojan %Y Schiele, Bernt %A referee: Theobalt, Christian %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Richer Object Representations for Object Class Detection in Challenging Real World Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-7678-5 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P xii, 219 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6361/
Pepik, B., Stark, M., Gehler, P., Ritschel, T., and Schiele, B. 2015a. 3D Object Class Detection in the Wild. IEEE Conference on Computer Vision and Pattern Recognition Workshops (3DSI 2015), IEEE.
Export
BibTeX
@inproceedings{Pepik3DSI2015, TITLE = {{3D} Object Class Detection in the Wild}, AUTHOR = {Pepik, Bojan and Stark, Michael and Gehler, Peter and Ritschel, Tobias and Schiele, Bernt}, LANGUAGE = {eng}, DOI = {10.1109/CVPRW.2015.7301358}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE Conference on Computer Vision and Pattern Recognition Workshops (3DSI 2015)}, PAGES = {1--10}, ADDRESS = {Boston, MA, USA}, }
Endnote
%0 Conference Proceedings %A Pepik, Bojan %A Stark, Michael %A Gehler, Peter %A Ritschel, Tobias %A Schiele, Bernt %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T 3D Object Class Detection in the Wild : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5935-D %R 10.1109/CVPRW.2015.7301358 %D 2015 %B Workshop on 3D from a Single Image %Z date of event: 2015-06-07 - 2015-06-12 %C Boston, MA, USA %B IEEE Conference on Computer Vision and Pattern Recognition Workshops %P 1 - 10 %I IEEE
Pepik, B., Benenson, R., Ritschel, T., and Schiele, B. 2015b. What is Holding Back Convnets for Detection? Pattern Recognition (GCPR 2015), Springer.
Export
BibTeX
@inproceedings{Pepik2015GCPR, TITLE = {What is Holding Back Convnets for Detection?}, AUTHOR = {Pepik, Bojan and Benenson, Rodrigo and Ritschel, Tobias and Schiele, Bernt}, LANGUAGE = {eng}, ISBN = {978-3-319-24946-9}, DOI = {10.1007/978-3-319-24947-6_43}, PUBLISHER = {Springer}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Pattern Recognition (GCPR 2015)}, EDITOR = {Gall, J{\"u}rgen and Gehler, Peter and Leibe, Bastian}, PAGES = {517--528}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9358}, ADDRESS = {Aachen, Germany}, }
Endnote
%0 Conference Proceedings %A Pepik, Bojan %A Benenson, Rodrigo %A Ritschel, Tobias %A Schiele, Bernt %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T What is Holding Back Convnets for Detection? : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5912-C %R 10.1007/978-3-319-24947-6_43 %D 2015 %B 37th German Conference on Pattern Recognition %Z date of event: 2015-10-07 - 2015-10-10 %C Aachen, Germany %B Pattern Recognition %E Gall, J&#252;rgen; Gehler, Peter; Leibe, Bastian %P 517 - 528 %I Springer %@ 978-3-319-24946-9 %B Lecture Notes in Computer Science %N 9358
Pishchulin, L., Wuhrer, S., Helten, T., Theobalt, C., and Schiele, B. 2015. Building Statistical Shape Spaces for 3D Human Modeling. http://arxiv.org/abs/1503.05860.
(arXiv: 1503.05860)
Abstract
Statistical models of 3D human shape and pose learned from scan databases have developed into valuable tools to solve a variety of vision and graphics problems. Unfortunately, most publicly available models are of limited expressiveness as they were learned on very small databases that hardly reflect the true variety in human body shapes. In this paper, we contribute by rebuilding a widely used statistical body representation from the largest commercially available scan database, and making the resulting model available to the community (visit http://humanshape.mpi-inf.mpg.de). As preprocessing several thousand scans for learning the model is a challenge in itself, we contribute by developing robust best practice solutions for scan alignment that quantitatively lead to the best learned models. We make implementations of these preprocessing steps also publicly available. We extensively evaluate the improved accuracy and generality of our new model, and show its improved performance for human body reconstruction from sparse input data.
Export
BibTeX
@online{941x, TITLE = {Building Statistical Shape Spaces for {3D} Human Modeling}, AUTHOR = {Pishchulin, Leonid and Wuhrer, Stefanie and Helten, Thomas and Theobalt, Christian and Schiele, Bernt}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1503.05860}, EPRINT = {1503.05860}, EPRINTTYPE = {arXiv}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Statistical models of 3D human shape and pose learned from scan databases have developed into valuable tools to solve a variety of vision and graphics problems. Unfortunately, most publicly available models are of limited expressiveness as they were learned on very small databases that hardly reflect the true variety in human body shapes. In this paper, we contribute by rebuilding a widely used statistical body representation from the largest commercially available scan database, and making the resulting model available to the community (visit http://humanshape.mpi-inf.mpg.de). As preprocessing several thousand scans for learning the model is a challenge in itself, we contribute by developing robust best practice solutions for scan alignment that quantitatively lead to the best learned models. We make implementations of these preprocessing steps also publicly available. We extensively evaluate the improved accuracy and generality of our new model, and show its improved performance for human body reconstruction from sparse input data.}, }
Endnote
%0 Report %A Pishchulin, Leonid %A Wuhrer, Stefanie %A Helten, Thomas %A Theobalt, Christian %A Schiele, Bernt %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Building Statistical Shape Spaces for 3D Human Modeling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-4B26-F %U http://arxiv.org/abs/1503.05860 %D 2015 %X Statistical models of 3D human shape and pose learned from scan databases have developed into valuable tools to solve a variety of vision and graphics problems. Unfortunately, most publicly available models are of limited expressiveness as they were learned on very small databases that hardly reflect the true variety in human body shapes. In this paper, we contribute by rebuilding a widely used statistical body representation from the largest commercially available scan database, and making the resulting model available to the community (visit http://humanshape.mpi-inf.mpg.de). As preprocessing several thousand scans for learning the model is a challenge in itself, we contribute by developing robust best practice solutions for scan alignment that quantitatively lead to the best learned models. We make implementations of these preprocessing steps also publicly available. We extensively evaluate the improved accuracy and generality of our new model, and show its improved performance for human body reconstruction from sparse input data. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Rhodin, H., Tompkin, J., Kim, K.I., et al. 2015a. Generalizing Wave Gestures from Sparse Examples for Real-time Character Control. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2015) 34, 6.
Export
BibTeX
@article{RhodinSAP2015, TITLE = {Generalizing Wave Gestures from Sparse Examples for Real-time Character Control}, AUTHOR = {Rhodin, Helge and Tompkin, James and Kim, Kwang In and de Aguiar, Edilson and Pfister, Hanspeter and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2816795.2818082}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {34}, NUMBER = {6}, PAGES = {1--12}, EID = {181}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2015}, }
Endnote
%0 Journal Article %A Rhodin, Helge %A Tompkin, James %A Kim, Kwang In %A de Aguiar, Edilson %A Pfister, Hanspeter %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Generalizing Wave Gestures from Sparse Examples for Real-time Character Control : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-2476-8 %R 10.1145/2816795.2818082 %7 2015 %D 2015 %J ACM Transactions on Graphics %O TOG %V 34 %N 6 %& 1 %P 1 - 12 %Z sequence number: 181 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2015 %O ACM SIGGRAPH Asia 2015 Kobe, Japan
Rhodin, H., Robertini, N., Richardt, C., Seidel, H.-P., and Theobalt, C. 2015b. A Versatile Scene Model With Differentiable Visibility Applied to Generative Pose Estimation. ICCV 2015, IEEE International Conference on Computer Vision, IEEE.
Export
BibTeX
@inproceedings{RhodinICCV2015, TITLE = {A Versatile Scene Model With Differentiable Visibility Applied to Generative Pose Estimation}, AUTHOR = {Rhodin, Helge and Robertini, Nadia and Richardt, Christian and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4673-8390-5}, DOI = {10.1109/ICCV.2015.94}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {ICCV 2015, IEEE International Conference on Computer Vision}, PAGES = {765--773}, ADDRESS = {Santiago, Chile}, }
Endnote
%0 Conference Proceedings %A Rhodin, Helge %A Robertini, Nadia %A Richardt, Christian %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Versatile Scene Model With Differentiable Visibility Applied to Generative Pose Estimation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-52DC-4 %R 10.1109/ICCV.2015.94 %D 2015 %B IEEE International Conference on Computer Vision %Z date of event: 2015-12-13 - 2015-12-16 %C Santiago, Chile %B ICCV 2015 %P 765 - 773 %I IEEE %@ 978-1-4673-8390-5 %U http://www.cv-foundation.org/openaccess/content_iccv_2015/html/Rhodin_A_Versatile_Scene_ICCV_2015_paper.html
Richardt, C., Tompkin, J., Bai, J., and Theobalt, C. 2015. User-centric Computational Videography. ACM SIGGRAPH 2015 Courses, ACM.
Export
BibTeX
@inproceedings{richardtSIGGRAPHCourse2015, TITLE = {User-centric Computational Videography}, AUTHOR = {Richardt, Christian and Tompkin, James and Bai, Jiamin and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4503-3634-5}, DOI = {10.1145/2776880.2792705}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {ACM SIGGRAPH 2015 Courses}, PAGES = {1--6}, EID = {25}, ADDRESS = {Los Angeles, CA, USA}, }
Endnote
%0 Conference Proceedings %A Richardt, Christian %A Tompkin, James %A Bai, Jiamin %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T User-centric Computational Videography : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5460-2 %R 10.1145/2776880.2792705 %D 2015 %B The 42nd International Conference and Exhibition on Computer Graphics and Interactive Techniques %Z date of event: 2015-08-09 - 2015-08-13 %C Los Angeles, CA, USA %B ACM SIGGRAPH 2015 Courses %P 1 - 6 %Z sequence number: 25 %I ACM %@ 978-1-4503-3634-5
Schmitz, M., Khalilbeigi, M., Balwierz, M., Lissermann, R., Mühlhäuser, M., and Steimle, J. 2015. Capricate: A Fabrication Pipeline to Design and 3D Print Capacitive Touch Sensors for Interactive Objects. UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, ACM.
Export
BibTeX
@inproceedings{SchmitzUIST2015, TITLE = {Capricate: {A} Fabrication Pipeline to Design and {3D} Print Capacitive Touch Sensors for Interactive Objects}, AUTHOR = {Schmitz, Martin and Khalilbeigi, Mohammadreza and Balwierz, Matthias and Lissermann, Roman and M{\"u}hlh{\"a}user, Max and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISBN = {978-1-4503-3779-3}, DOI = {10.1145/2807442.2807503}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {UIST'15, 28th Annual ACM Symposium on User Interface Software and Technology}, PAGES = {253--258}, ADDRESS = {Charlotte, NC, USA}, }
Endnote
%0 Conference Proceedings %A Schmitz, Martin %A Khalilbeigi, Mohammadreza %A Balwierz, Matthias %A Lissermann, Roman %A M&#252;hlh&#228;user, Max %A Steimle, J&#252;rgen %+ External Organizations External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Capricate: A Fabrication Pipeline to Design and 3D Print Capacitive Touch Sensors for Interactive Objects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-664A-5 %R 10.1145/2807442.2807503 %D 2015 %B 28th Annual ACM Symposium on User Interface Software and Technology %Z date of event: 2015-11-08 - 2015-11-11 %C Charlotte, NC, USA %B UIST'15 %P 253 - 258 %I ACM %@ 978-1-4503-3779-3
Schulz, C., von Tycowicz, C., Seidel, H.-P., and Hildebrandt, K. 2015. Animating Articulated Characters Using Wiggly Splines. Proceedings SCA 2015, ACM.
Export
BibTeX
@inproceedings{SchulzSCA2015, TITLE = {Animating Articulated Characters Using Wiggly Splines}, AUTHOR = {Schulz, Christian and von Tycowicz, Christoph and Seidel, Hans-Peter and Hildebrandt, Klaus}, LANGUAGE = {eng}, ISBN = {978-1-4503-3496-9}, DOI = {10.1145/2786784.2786799}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Proceedings SCA 2015}, PAGES = {101--109}, ADDRESS = {Los Angeles, CA, USA}, }
Endnote
%0 Conference Proceedings %A Schulz, Christian %A von Tycowicz, Christoph %A Seidel, Hans-Peter %A Hildebrandt, Klaus %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Animating Articulated Characters Using Wiggly Splines : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-8EA3-0 %R 10.1145/2786784.2786799 %D 2015 %B 14th ACM SIGGRAPH / Eurographics Symposium on Computer Animation %Z date of event: 2015-08-07 - 2015-08-09 %C Los Angeles, CA, USA %B Proceedings SCA 2015 %P 101 - 109 %I ACM %@ 978-1-4503-3496-9
Siegl, C., Colaianni, M., Thies, L., et al. 2015. Real-time Pixel Luminance Optimization for Dynamic Multi-projection Mapping. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2015) 34, 6.
Export
BibTeX
@article{siegl2015, TITLE = {Real-time Pixel Luminance Optimization for Dynamic Multi-projection Mapping}, AUTHOR = {Siegl, Christian and Colaianni, Matteo and Thies, Lucas and Thies, Justus and Zollh{\"o}fer, Michael and Izadi, Shahram and Stamminger, Marc and Bauer, Frank}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2816795.2818111}, PUBLISHER = {Association for Computing Machinery}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {34}, NUMBER = {6}, EID = {237}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2015}, }
Endnote
%0 Journal Article %A Siegl, Christian %A Colaianni, Matteo %A Thies, Lucas %A Thies, Justus %A Zollh&#246;fer, Michael %A Izadi, Shahram %A Stamminger, Marc %A Bauer, Frank %+ External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Real-time Pixel Luminance Optimization for Dynamic Multi-projection Mapping : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4A49-0 %R 10.1145/2816795.2818111 %7 2015 %D 2015 %J ACM Transactions on Graphics %O TOG %V 34 %N 6 %Z sequence number: 237 %I Association for Computing Machinery %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2015 %O ACM SIGGRAPH Asia 2015 Kobe, Japan
Sridhar, S., Feit, A.M., Theobalt, C., and Oulasvirta, A. 2015a. Investigating the Dexterity of Multi-finger Input for Mid-air Text Entry. CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{sridhar_investigating_2015, TITLE = {Investigating the Dexterity of Multi-finger Input for Mid-air Text Entry}, AUTHOR = {Sridhar, Srinath and Feit, Anna Maria and Theobalt, Christian and Oulasvirta, Antti}, LANGUAGE = {eng}, ISBN = {978-1-4503-3145-6}, DOI = {10.1145/2702123.2702136}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems}, PAGES = {3643--3652}, ADDRESS = {Seoul, Korea}, }
Endnote
%0 Conference Proceedings %A Sridhar, Srinath %A Feit, Anna Maria %A Theobalt, Christian %A Oulasvirta, Antti %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Investigating the Dexterity of Multi-finger Input for Mid-air Text Entry : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF7B-3 %R 10.1145/2702123.2702136 %D 2015 %B 33rd ACM SIGCHI Conference on Human Factors in Computing Systems %Z date of event: 2015-04-18 - 2015-04-23 %C Seoul, Korea %B CHI 2015 %P 3643 - 3652 %I ACM %@ 978-1-4503-3145-6
Sridhar, S., Mueller, F., Oulasvirta, A., and Theobalt, C. 2015b. Fast and Robust Hand Tracking Using Detection-Guided Optimization. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), IEEE.
Export
BibTeX
@inproceedings{Sridhar15cvpr, TITLE = {Fast and Robust Hand Tracking Using Detection-Guided Optimization}, AUTHOR = {Sridhar, Srinath and Mueller, Franziska and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, DOI = {10.1109/CVPR.2015.7298941}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015)}, PAGES = {3213--3221}, ADDRESS = {Boston, MA, USA}, }
Endnote
%0 Conference Proceedings %A Sridhar, Srinath %A Mueller, Franziska %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Fast and Robust Hand Tracking Using Detection-Guided Optimization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5456-9 %R 10.1109/CVPR.2015.7298941 %D 2015 %B IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2015-06-07 - 2015-06-12 %C Boston, MA, USA %B IEEE Conference on Computer Vision and Pattern Recognition %P 3213 - 3221 %I IEEE
Steimle, J. 2015. Printed Electronics for Human-Computer Interaction. Interactions 22, 3.
Export
BibTeX
@article{SteimlePrinted, TITLE = {Printed Electronics for Human-Computer Interaction}, AUTHOR = {Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISSN = {1072-5520}, DOI = {10.1145/2754304}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Interactions}, VOLUME = {22}, NUMBER = {3}, PAGES = {72--75}, }
Endnote
%0 Journal Article %A Steimle, J&#252;rgen %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Printed Electronics for Human-Computer Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-6642-6 %R 10.1145/2754304 %7 2015 %D 2015 %J Interactions %V 22 %N 3 %& 72 %P 72 - 75 %I ACM %C New York, NY %@ false
Sung, M., Kim, V.G., Angst, R., and Guibas, L. 2015. Data-driven Structural Priors for Shape Completion. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2015) 34, 6.
Export
BibTeX
@article{SungSIGGRAPHAsia2015, TITLE = {Data-driven Structural Priors for Shape Completion}, AUTHOR = {Sung, Minhyuk and Kim, Vladimir G. and Angst, Roland and Guibas, Leonidas}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2816795.2818094}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {34}, NUMBER = {6}, PAGES = {1--11}, EID = {175}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2015}, }
Endnote
%0 Journal Article %A Sung, Minhyuk %A Kim, Vladimir G. %A Angst, Roland %A Guibas, Leonidas %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Data-driven Structural Priors for Shape Completion : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-07CC-0 %R 10.1145/2816795.2818094 %7 2015 %D 2015 %J ACM Transactions on Graphics %O TOG %V 34 %N 6 %& 1 %P 1 - 11 %Z sequence number: 175 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2015 %O ACM SIGGRAPH Asia 2015 Kobe, Japan
Thies, J., Zollhöfer, M., Nießner, M., Valgaerts, L., Stamminger, M., and Theobalt, C. 2015. Real-time Expression Transfer for Facial Reenactment. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2015) 34, 6.
Export
BibTeX
@article{ThiesSAP2015, TITLE = {Real-time Expression Transfer for Facial Reenactment}, AUTHOR = {Thies, Justus and Zollh{\"o}fer, Michael and Nie{\ss}ner, Matthias and Valgaerts, Levi and Stamminger, Marc and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2816795.2818056}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {34}, NUMBER = {6}, PAGES = {1--14}, EID = {183}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2015}, }
Endnote
%0 Journal Article %A Thies, Justus %A Zollh&#246;fer, Michael %A Nie&#223;ner, Matthias %A Valgaerts, Levi %A Stamminger, Marc %A Theobalt, Christian %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Expression Transfer for Facial Reenactment : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-2478-4 %R 10.1145/2816795.2818056 %7 2015 %D 2015 %J ACM Transactions on Graphics %O TOG %V 34 %N 6 %& 1 %P 1 - 14 %Z sequence number: 183 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2015 %O ACM SIGGRAPH Asia 2015 Kobe, Japan
Vangorp, P., Myszkowski, K., Graf, E., and Mantiuk, R. 2015a. An Empirical Model for Local Luminance Adaptation in the Fovea (Oral Presentation). Perception (Proc. ECVP 2015) 44, S1.
Export
BibTeX
@article{VangeropECVP2015, TITLE = {An Empirical Model for Local Luminance Adaptation in the Fovea (Oral Presentation)}, AUTHOR = {Vangorp, Peter and Myszkowski, Karol and Graf, Erich and Mantiuk, Rafa{\l}}, LANGUAGE = {eng}, ISSN = {0301-0066}, DOI = {10.1177/0301006615598674}, PUBLISHER = {SAGE}, ADDRESS = {London}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015-08}, JOURNAL = {Perception (Proc. ECVP)}, VOLUME = {44}, NUMBER = {S1}, PAGES = {98--98}, EID = {1T3C001}, BOOKTITLE = {38th European Conference on Visual Perception (ECVP 2015)}, }
Endnote
%0 Journal Article %A Vangorp, Peter %A Myszkowski, Karol %A Graf, Erich %A Mantiuk, Rafa&#322; %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T An Empirical Model for Local Luminance Adaptation in the Fovea (Oral Presentation) : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-245C-4 %R 10.1177/0301006615598674 %7 2015 %D 2015 %J Perception %V 44 %N S1 %& 98 %P 98 - 98 %Z sequence number: 1T3C001 %I SAGE %C London %@ false %B 38th European Conference on Visual Perception %O ECVP 2015 Liverpool
Vangorp, P., Myszkowski, K., Graf, E.W., and Mantiuk, R.K. 2015b. A Model of Local Adaptation. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2015) 34, 6.
Export
BibTeX
@article{Vangorp:2015:LocalAdaptationSIGAsia, TITLE = {A Model of Local Adaptation}, AUTHOR = {Vangorp, Peter and Myszkowski, Karol and Graf, Erich W. and Mantiuk, Rafa{\l} K.}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2816795.2818086}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {34}, NUMBER = {6}, PAGES = {1--13}, EID = {166}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2015}, }
Endnote
%0 Journal Article %A Vangorp, Peter %A Myszkowski, Karol %A Graf, Erich W. %A Mantiuk, Rafa&#322; K. %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Model of Local Adaptation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-2455-1 %R 10.1145/2816795.2818086 %7 2015 %D 2015 %J ACM Transactions on Graphics %O TOG %V 34 %N 6 %& 1 %P 1 - 13 %Z sequence number: 166 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2015 %O ACM SIGGRAPH Asia 2015 Kobe, Japan %U http://resources.mpi-inf.mpg.de/LocalAdaptation/
Von Tycowicz, C., Schulz, C., Seidel, H.-P., and Hildebrandt, K. 2015. Real-time Nonlinear Shape Interpolation. ACM Transactions on Graphics 34, 3.
Export
BibTeX
@article{Tycowicz2015, TITLE = {Real-time Nonlinear Shape Interpolation}, AUTHOR = {von Tycowicz, Christoph and Schulz, Christian and Seidel, Hans-Peter and Hildebrandt, Klaus}, LANGUAGE = {eng}, DOI = {10.1145/2729972}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics}, VOLUME = {34}, NUMBER = {3}, EID = {34}, }
Endnote
%0 Journal Article %A von Tycowicz, Christoph %A Schulz, Christian %A Seidel, Hans-Peter %A Hildebrandt, Klaus %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Nonlinear Shape Interpolation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D65-9 %R 10.1145/2729972 %7 2015 %D 2015 %J ACM Transactions on Graphics %V 34 %N 3 %Z sequence number: 34 %I ACM %C New York, NY
Wang, Z. 2015. Pattern Search for the Visualization of Scalar, Vector, and Line Fields. PhD thesis, Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@phdthesis{WangPhd15, TITLE = {Pattern Search for the Visualization of Scalar, Vector, and Line Fields}, AUTHOR = {Wang, Zhongjie}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Wang, Zhongjie %Y Seidel, Hans-Peter %A referee: Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Pattern Search for the Visualization of Scalar, Vector, and Line Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-48A5-9 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 103 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6330/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Wang, Z., Seidel, H.-P., and Weinkauf, T. 2015. Hierarchical Hashing for Pattern Search in 3D Vector Fields. VMV 2015 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{WangVMV2015, TITLE = {Hierarchical Hashing for Pattern Search in {3D} Vector Fields}, AUTHOR = {Wang, Zhongjie and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-3-905674-95-8}, DOI = {10.2312/vmv.20151256}, PUBLISHER = {Eurographics Association}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {VMV 2015 Vision, Modeling and Visualization}, EDITOR = {Bommes, David and Ritschel, Tobias and Schultz, Thomas}, PAGES = {41--48}, ADDRESS = {Aachen, Germany}, }
Endnote
%0 Conference Proceedings %A Wang, Zhongjie %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Hierarchical Hashing for Pattern Search in 3D Vector Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-F760-4 %R 10.2312/vmv.20151256 %D 2015 %B 20th International Symposium on Vision, Modeling and Visualization %Z date of event: 2015-10-07 - 2015-10-09 %C Aachen, Germany %B VMV 2015 Vision, Modeling and Visualization %E Bommes, David; Ritschel, Tobias; Schultz, Thomas %P 41 - 48 %I Eurographics Association %@ 978-3-905674-95-8
Weigel, M., Lu, T., Oulasvirta, A., Bailly, G., Majidi, C., and Steimle, J. 2015. iSkin: Flexible, Stretchable and Visually Customizable On-body Touch Sensors for Mobile Computing. CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{Weigel2015, TITLE = {{iSkin}: {Flexible}, Stretchable and Visually Customizable On-Body Touch Sensors for Mobile Computing}, AUTHOR = {Weigel, Martin and Lu, Tong and Oulasvirta, Antti and Bailly, Gilles and Majidi, Carmel and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISBN = {978-1-4503-3145-6}, DOI = {10.1145/2702123.2702391}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems}, PAGES = {2991--3000}, ADDRESS = {Seoul, Korea}, }
Endnote
%0 Conference Proceedings %A Weigel, Martin %A Lu, Tong %A Oulasvirta, Antti %A Bailly, Gilles %A Majidi, Carmel %A Steimle, J&#252;rgen %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T iSkin: Flexible, Stretchable and Visually Customizable On-body Touch Sensors for Mobile Computing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-BFA0-C %R 10.1145/2702123.2702391 %D 2015 %B 33rd ACM SIGCHI Conference on Human Factors in Computing Systems %Z date of event: 2015-04-18 - 2015-04-23 %C Seoul, Korea %B CHI 2015 %P 2991 - 3000 %I ACM %@ 978-1-4503-3145-6
Zollhöfer, M., Dai, A., Innmann, M., et al. 2015. Shading-based Refinement on Volumetric Signed Distance Functions. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2015) 34, 4.
Export
BibTeX
@article{ZollhoeferSIGGRAPH2015, TITLE = {Shading-based Refinement on Volumetric Signed Distance Functions}, AUTHOR = {Zollh{\"o}fer, Michael and Dai, Angela and Innmann, Matthias and Wu, Chenglei and Stamminger, Marc and Theobalt, Christian and Nie{\ss}ner, Matthias}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2766887}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {34}, NUMBER = {4}, EID = {96}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2015}, }
Endnote
%0 Journal Article %A Zollh&#246;fer, Michael %A Dai, Angela %A Innmann, Matthias %A Wu, Chenglei %A Stamminger, Marc %A Theobalt, Christian %A Nie&#223;ner, Matthias %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Shading-based Refinement on Volumetric Signed Distance Functions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-528D-5 %R 10.1145/2766887 %7 2015 %D 2015 %J ACM Transactions on Graphics %V 34 %N 4 %Z sequence number: 96 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2015 %O ACM SIGGRAPH 2015 Los Angeles, California
2014
Åkesson, S., Odin, C., Hegedüs, R., et al. 2014. Testing Avian Compass Calibration: Comparative Experiments with Diurnal and Nocturnal Passerine Migrants in South Sweden. Biology Open 4, 1.
Export
BibTeX
@article{Hegedus2014BiologyOpen, TITLE = {Testing Avian Compass Calibration: {C}omparative Experiments with Diurnal and Nocturnal Passerine Migrants in {S}outh {S}weden}, AUTHOR = {{\AA}kesson, Susanne and Odin, Catharina and Heged{\"u}s, Ramon and Ilieva, Mihaela and Sj{\"o}holm, Christoffer and Farkas, Alexandra and Horv{\'a}th, G{\'a}bor}, LANGUAGE = {eng}, ISSN = {2046-6390}, URL = {http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4295164&tool=pmcentrez&rendertype=abstract}, DOI = {10.1242/bio.20149837}, PUBLISHER = {The Company of Biologists}, ADDRESS = {Cambridge}, YEAR = {2014}, JOURNAL = {Biology Open}, VOLUME = {4}, NUMBER = {1}, PAGES = {35--47}, }
Endnote
%0 Journal Article %A &#197;kesson, Susanne %A Odin, Catharina %A Heged&#252;s, Ramon %A Ilieva, Mihaela %A Sj&#246;holm, Christoffer %A Farkas, Alexandra %A Horv&#225;th, G&#225;bor %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T Testing Avian Compass Calibration: Comparative Experiments with Diurnal and Nocturnal Passerine Migrants in South Sweden : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-CD29-7 %2 PMC4295164 %F OTHER: publisher-idBIO20149837 %R 10.1242/bio.20149837 %U http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4295164&tool=pmcentrez&rendertype=abstract %7 2014-12-12 %D 2014 %8 12.12.2014 %K Erithacus rubecula %J Biology Open %V 4 %N 1 %& 35 %P 35 - 47 %I The Company of Biologists %C Cambridge %@ false
Athukorala, K., Oulasvirta, A., Glowacka, D., Vreeken, J., and Jacucci, G. 2014a. Interaction Model to Predict Subjective-specificity of Search Results. UMAP 2014 Extended Proceedings, CEUR-WS.org.
Export
BibTeX
@inproceedings{atukorala:14:interaction, TITLE = {Interaction Model to Predict Subjective-specificity of Search Results}, AUTHOR = {Athukorala, Kumaripaba and Oulasvirta, Antti and Glowacka, Dorota and Vreeken, Jilles and Jacucci, Giulio}, LANGUAGE = {eng}, URL = {http://ceur-ws.org/Vol-1181/umap2014_lateresults_01.pdf; urn:nbn:de:0074-1181-4}, PUBLISHER = {CEUR-WS.org}, YEAR = {2014}, BOOKTITLE = {UMAP 2014 Extended Proceedings}, EDITOR = {Cantador, Iv{\'a}n and Chi, Min and Farzan, Rosta and J{\"a}schke, Robert}, PAGES = {69--74}, SERIES = {CEUR Workshop Proceedings}, VOLUME = {1181}, ADDRESS = {Aalborg, Denmark}, }
Endnote
%0 Conference Proceedings %A Athukorala, Kumaripaba %A Oulasvirta, Antti %A Glowacka, Dorota %A Vreeken, Jilles %A Jacucci, Giulio %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Interaction Model to Predict Subjective-specificity of Search Results : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5397-D %U http://ceur-ws.org/Vol-1181/umap2014_lateresults_01.pdf %D 2014 %B 22nd Conference on User Modeling, Adaptation, and Personalization %Z date of event: 2014-07-07 - 2014-07-11 %C Aalborg, Denmark %B UMAP 2014 Extended Proceedings %E Cantador, Iv&#225;n; Chi, Min; Farzan, Rosta; J&#228;schke, Robert %P 69 - 74 %I CEUR-WS.org %B CEUR Workshop Proceedings %N 1181 %U http://ceur-ws.org/Vol-1181/umap2014_lateresults_01.pdf
Athukorala, K., Oulasvirta, A., Glowacka, D., Vreeken, J., and Jacucci, G. 2014b. Supporting Exploratory Search Through User Modelling. UMAP 2014 Extended Proceedings (PIA 2014 in conjunction with UMAP 2014), CEUR-WS.org.
Export
BibTeX
@inproceedings{atukorala:14:supporting, TITLE = {Supporting Exploratory Search Through User Modelling}, AUTHOR = {Athukorala, Kumaripaba and Oulasvirta, Antti and Glowacka, Dorota and Vreeken, Jilles and Jacucci, Giulio}, LANGUAGE = {eng}, ISSN = {1613-0073}, URL = {http://ceur-ws.org/Vol-1181/pia2014_paper_04.pdf; urn:nbn:de:0074-1181-4; http://ceur-ws.org/Vol-1181/pia2014_proceedings.pdf}, PUBLISHER = {CEUR-WS.org}, YEAR = {2014}, BOOKTITLE = {UMAP 2014 Extended Proceedings (PIA 2014 in conjunction with UMAP 2014)}, EDITOR = {Cantador, Iv{\'a}n and Chi, Min and Farzan, Rosta and J{\"a}schke, Robert}, PAGES = {1--47}, SERIES = {CEUR Workshop Proceedings}, VOLUME = {1181}, ADDRESS = {Aalborg, Denmark}, }
Endnote
%0 Conference Proceedings %A Athukorala, Kumaripaba %A Oulasvirta, Antti %A Glowacka, Dorota %A Vreeken, Jilles %A Jacucci, Giulio %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Supporting Exploratory Search Through User Modelling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-538C-7 %U http://ceur-ws.org/Vol-1181/pia2014_paper_04.pdf %D 2014 %B Joint Workshop on Personalised Information Access %Z date of event: 2014-07-07 - 2014-07-07 %C Aalborg, Denmark %B UMAP 2014 Extended Proceedings %E Cantador, Iv&#225;n; Chi, Min; Farzan, Rosta; J&#228;schke, Robert %P 1 - 47 %I CEUR-WS.org %B CEUR Workshop Proceedings %N 1181 %@ false %U http://ceur-ws.org/Vol-1181/pia2014_paper_04.pdf
Athukorala, K., Oulasvirta, A., Glowacka, D., Vreeken, J., and Jacucci, G. 2014c. Narrow or Broad? Estimating Subjective Specificity in Exploratory Search. CIKM’14, 23rd ACM International Conference on Information and Knowledge Management, ACM.
Export
BibTeX
@inproceedings{atukorala:14:foraging, TITLE = {Narrow or Broad? {Estimating} Subjective Specificity in Exploratory Search}, AUTHOR = {Athukorala, Kumaripaba and Oulasvirta, Antti and Glowacka, Dorota and Vreeken, Jilles and Jacucci, Giulio}, LANGUAGE = {eng}, ISBN = {978-1-4503-2598-1}, DOI = {10.1145/2661829.2661904}, PUBLISHER = {ACM}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {CIKM'14, 23rd ACM International Conference on Information and Knowledge Management}, EDITOR = {Li, Jianzhong and Wang, X. Sean and Garofalakis, Minos and Soboroff, Ian and Suel, Torsten and Wang, Min}, PAGES = {819--828}, ADDRESS = {Shanghai, China}, }
Endnote
%0 Conference Proceedings %A Athukorala, Kumaripaba %A Oulasvirta, Antti %A Glowacka, Dorota %A Vreeken, Jilles %A Jacucci, Giulio %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Narrow or Broad? Estimating Subjective Specificity in Exploratory Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-53A1-6 %R 10.1145/2661829.2661904 %D 2014 %B 23rd ACM International Conference on Information and Knowledge Management %Z date of event: 2014-11-03 - 2014-11-07 %C Shanghai, China %B CIKM'14 %E Li, Jianzhong; Wang, X. Sean; Garofalakis, Minos; Soboroff, Ian; Suel, Torsten; Wang, Min %P 819 - 828 %I ACM %@ 978-1-4503-2598-1
Bachynskyi, M., Oulasvirta, A., Palmas, G., and Weinkauf, T. 2014. Is Motion-capture-based Biomechanical Simulation Valid for HCI Studies? Study and Implications. CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM.
Abstract
Motion-capture-based biomechanical simulation is a non-invasive analysis method that yields a rich description of posture, joint, and muscle activity in human movement. The method is presently gaining ground in sports, medicine, and industrial ergonomics, but it also bears great potential for studies in HCI where the physical ergonomics of a design is important. To make the method more broadly accessible, we study its predictive validity for movements and users typical to studies in HCI. We discuss the sources of error in biomechanical simulation and present results from two validation studies conducted with a state-of-the-art system. Study I tested aimed movements ranging from multitouch gestures to dancing, finding out that the critical limiting factor is the size of movement. Study II compared muscle activation predictions to surface-EMG recordings in a 3D pointing task. The data shows medium-to-high validity that is, however, constrained by some characteristics of the movement and the user. We draw concrete recommendations to practitioners and discuss challenges to developing the method further.
Export
BibTeX
@inproceedings{bachynskyi14a, TITLE = {Is Motion-capture-based Biomechanical Simulation Valid for {HCI} Studies? {Study} and Implications}, AUTHOR = {Bachynskyi, Myroslav and Oulasvirta, Antti and Palmas, Gregorio and Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-1-4503-2473-1}, URL = {http://doi.acm.org/10.1145/2556288.2557027}, DOI = {10.1145/2556288.2557027}, PUBLISHER = {ACM}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Motion-capture-based biomechanical simulation is a non-invasive analysis method that yields a rich description of posture, joint, and muscle activity in human movement. The method is presently gaining ground in sports, medicine, and industrial ergonomics, but it also bears great potential for studies in HCI where the physical ergonomics of a design is important. To make the method more broadly accessible, we study its predictive validity for movements and users typical to studies in HCI. We discuss the sources of error in biomechanical simulation and present results from two validation studies conducted with a state-of-the-art system. Study I tested aimed movements ranging from multitouch gestures to dancing, finding out that the critical limiting factor is the size of movement. Study II compared muscle activation predictions to surface-EMG recordings in a 3D pointing task. The data shows medium-to-high validity that is, however, constrained by some characteristics of the movement and the user. We draw concrete recommendations to practitioners and discuss challenges to developing the method further.}, BOOKTITLE = {CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems}, PAGES = {3215--3224}, ADDRESS = {Toronto, Canada}, }
Endnote
%0 Conference Proceedings %A Bachynskyi, Myroslav %A Oulasvirta, Antti %A Palmas, Gregorio %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Is Motion-capture-based Biomechanical Simulation Valid for HCI Studies? Study and Implications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4D2D-8 %R 10.1145/2556288.2557027 %U http://doi.acm.org/10.1145/2556288.2557027 %D 2014 %B 32nd Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2014-04-26 - 2014-05-01 %C Toronto, Canada %X Motion-capture-based biomechanical simulation is a non-invasive analysis method that yields a rich description of posture, joint, and muscle activity in human movement. The method is presently gaining ground in sports, medicine, and industrial ergonomics, but it also bears great potential for studies in HCI where the physical ergonomics of a design is important. To make the method more broadly accessible, we study its predictive validity for movements and users typical to studies in HCI. We discuss the sources of error in biomechanical simulation and present results from two validation studies conducted with a state-of-the-art system. Study I tested aimed movements ranging from multitouch gestures to dancing, finding out that the critical limiting factor is the size of movement. Study II compared muscle activation predictions to surface-EMG recordings in a 3D pointing task. The data shows medium-to-high validity that is, however, constrained by some characteristics of the movement and the user. We draw concrete recommendations to practitioners and discuss challenges to developing the method further. %B CHI 2014 %P 3215 - 3224 %I ACM %@ 978-1-4503-2473-1
Bailly, G., Oulasvirta, A., Brumby, D.P., and Howes, A. 2014. Model of Visual Search and Selection Time in Linear Menus. CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{bailly2014model, TITLE = {Model of Visual Search and Selection Time in Linear Menus}, AUTHOR = {Bailly, Gilles and Oulasvirta, Antti and Brumby, Duncan P. and Howes, Andrew}, LANGUAGE = {eng}, ISBN = {978-1-4503-2473-1}, DOI = {10.1145/2556288.2557093}, PUBLISHER = {ACM}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems}, PAGES = {3865--3874}, ADDRESS = {Toronto, Canada}, }
Endnote
%0 Conference Proceedings %A Bailly, Gilles %A Oulasvirta, Antti %A Brumby, Duncan P. %A Howes, Andrew %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Model of Visual Search and Selection Time in Linear Menus : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-C43C-9 %R 10.1145/2556288.2557093 %D 2014 %B 32nd Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2014-04-26 - 2014-05-01 %C Toronto, Canada %B CHI 2014 %P 3865 - 3874 %I ACM %@ 978-1-4503-2473-1
Bergmann, S., Ritschel, T., and Dachsbacher, C. 2014. Interactive Appearance Editing in RGB-D Images. VMV 2014 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{BergmannVMV2014, TITLE = {Interactive Appearance Editing in {RGB-D} Images}, AUTHOR = {Bergmann, Stephan and Ritschel, Tobias and Dachsbacher, Carsten}, LANGUAGE = {eng}, DOI = {10.2312/vmv.20141269}, PUBLISHER = {Eurographics Association}, YEAR = {2014}, DATE = {2014-10}, BOOKTITLE = {VMV 2014 Vision, Modeling and Visualization}, EDITOR = {Bender, Jan and Kuijper, Arjan and von Landesberger, Tatiana and Theisel, Holger and Urban, Philipp}, PAGES = {1--8}, ADDRESS = {Darmstadt, Germany}, }
Endnote
%0 Conference Proceedings %A Bergmann, Stephan %A Ritschel, Tobias %A Dachsbacher, Carsten %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Interactive Appearance Editing in RGB-D Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-533B-C %R 10.2312/vmv.20141269 %D 2014 %B 19th International Workshop on Vision, Modeling and Visualization %Z date of event: 2014-10-08 - 2014-10-10 %C Darmstadt, Germany %B VMV 2014 Vision, Modeling and Visualization %E Bender, Jan; Kuijper, Arjan; von Landesberger, Tatiana; Theisel, Holger; Urban, Philipp %P 1 - 8 %I Eurographics Association
Bozkurt, N. 2014. Interacting with Five Fingernail Displays Using Hand Postures. Master's thesis, Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@mastersthesis{BozkurtMastersThesis2014, TITLE = {Interacting with Five Fingernail Displays Using Hand Postures}, AUTHOR = {Bozkurt, Nisa}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Bozkurt, Nisa %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Interacting with Five Fingernail Displays Using Hand Postures : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D88-A %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %V master %9 master
Brunton, A., Wand, M., Wuhrer, S., Seidel, H.-P., and Weinkauf, T. 2014a. A Low-dimensional Representation for Robust Partial Isometric Correspondences Computation. Graphical Models 76, 2.
Abstract
Intrinsic shape matching has become the standard approach for pose invariant correspondence estimation among deformable shapes. Most existing approaches assume global consistency. While global isometric matching is well understood, only a few heuristic solutions are known for partial matching. Partial matching is particularly important for robustness to topological noise, which is a common problem in real-world scanner data. We introduce a new approach to partial isometric matching based on the observation that isometries are fully determined by local information: a map of a single point and its tangent space fixes an isometry. We develop a new representation for partial isometric maps based on equivalence classes of correspondences between pairs of points and their tangent-spaces. We apply our approach to register partial point clouds and compare it to the state-of-the-art methods, where we obtain significant improvements over global methods for real-world data and stronger guarantees than previous partial matching algorithms.
Export
BibTeX
@article{brunton13, TITLE = {A Low-dimensional Representation for Robust Partial Isometric Correspondences Computation}, AUTHOR = {Brunton, Alan and Wand, Michael and Wuhrer, Stefanie and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISSN = {1524-0703}, DOI = {10.1016/j.gmod.2013.11.003}, PUBLISHER = {Academic Press}, ADDRESS = {San Diego, CA}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Intrinsic shape matching has become the standard approach for pose invariant correspondence estimation among deformable shapes. Most existing approaches assume global consistency. While global isometric matching is well understood, only a few heuristic solutions are known for partial matching. Partial matching is particularly important for robustness to topological noise, which is a common problem in real-world scanner data. We introduce a new approach to partial isometric matching based on the observation that isometries are fully determined by local information: a map of a single point and its tangent space fixes an isometry. We develop a new representation for partial isometric maps based on equivalence classes of correspondences between pairs of points and their tangent-spaces. We apply our approach to register partial point clouds and compare it to the state-of-the-art methods, where we obtain significant improvements over global methods for real-world data and stronger guarantees than previous partial matching algorithms.}, JOURNAL = {Graphical Models}, VOLUME = {76}, NUMBER = {2}, PAGES = {70--85}, }
Endnote
%0 Journal Article %A Brunton, Alan %A Wand, Michael %A Wuhrer, Stefanie %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Low-dimensional Representation for Robust Partial Isometric Correspondences Computation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-F6E9-5 %R 10.1016/j.gmod.2013.11.003 %7 2013-12-15 %D 2014 %X Intrinsic shape matching has become the standard approach for pose invariant correspondence estimation among deformable shapes. Most existing approaches assume global consistency. While global isometric matching is well understood, only a few heuristic solutions are known for partial matching. Partial matching is particularly important for robustness to topological noise, which is a common problem in real-world scanner data. We introduce a new approach to partial isometric matching based on the observation that isometries are fully determined by local information: a map of a single point and its tangent space fixes an isometry. We develop a new representation for partial isometric maps based on equivalence classes of correspondences between pairs of points and their tangent-spaces. We apply our approach to register partial point clouds and compare it to the state-of-the-art methods, where we obtain significant improvements over global methods for real-world data and stronger guarantees than previous partial matching algorithms. %J Graphical Models %V 76 %N 2 %& 70 %P 70 - 85 %I Academic Press %C San Diego, CA %@ false
Brunton, A., Salazar, A., Bolkart, T., and Wuhrer, S. 2014b. Review of Statistical Shape Spaces for 3D Data with Comparative Analysis for Human Faces. Computer Vision and Image Understanding 128.
Export
BibTeX
@article{BruntonSalazarBolkartWuhrer2014, TITLE = {Review of Statistical Shape Spaces for {3D} Data with Comparative Analysis for Human Faces}, AUTHOR = {Brunton, Alan and Salazar, Augusto and Bolkart, Timo and Wuhrer, Stefanie}, LANGUAGE = {eng}, ISSN = {1077-3142}, DOI = {10.1016/j.cviu.2014.05.005}, PUBLISHER = {Academic Press}, ADDRESS = {San Diego, CA}, YEAR = {2014}, DATE = {2014}, JOURNAL = {Computer Vision and Image Understanding}, VOLUME = {128}, PAGES = {1--17}, }
Endnote
%0 Journal Article %A Brunton, Alan %A Salazar, Augusto %A Bolkart, Timo %A Wuhrer, Stefanie %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Review of Statistical Shape Spaces for 3D Data with Comparative Analysis for Human Faces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-6C77-8 %F ISI: 000341482400001 %R 10.1016/j.cviu.2014.05.005 %7 2014-05-27 %D 2014 %J Computer Vision and Image Understanding %V 128 %& 1 %P 1 - 17 %I Academic Press %C San Diego, CA %@ false
Dabala, L., Kellnhofer, P., Ritschel, T., et al. 2014. Manipulating Refractive and Reflective Binocular Disparity. Computer Graphics Forum (Proc. EUROGRAPHICS 2014) 33, 2.
Abstract
Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e. g., for glass, that both reflects and refracts, which may confuse the observer and result in poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates, that our approach combines comfortable viewing with realistic depiction of typical specular scenes.
Export
BibTeX
@article{Kellnhofer2014b, TITLE = {Manipulating Refractive and Reflective Binocular Disparity}, AUTHOR = {Dabala, Lukasz and Kellnhofer, Petr and Ritschel, Tobias and Didyk, Piotr and Templin, Krzysztof and Rokita, Przemyslaw and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12290}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e. g., for glass, that both reflects and refracts, which may confuse the observer and result in poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates, that our approach combines comfortable viewing with realistic depiction of typical specular scenes.}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {33}, NUMBER = {2}, PAGES = {53--62}, BOOKTITLE = {EUROGRAPHICS 2014}, EDITOR = {L{\'e}vy, Bruno and Kautz, Jan}, }
Endnote
%0 Journal Article %A Dabala, Lukasz %A Kellnhofer, Petr %A Ritschel, Tobias %A Didyk, Piotr %A Templin, Krzysztof %A Rokita, Przemyslaw %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Manipulating Refractive and Reflective Binocular Disparity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-EEF9-6 %R 10.1111/cgf.12290 %7 2014-06-01 %D 2014 %X Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e.g., for glass, which both reflects and refracts; this may confuse the observer and result in a poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates that our approach combines comfortable viewing with realistic depiction of typical specular scenes. %J Computer Graphics Forum %V 33 %N 2 %& 53 %P 53 - 62 %I Wiley-Blackwell %C Oxford, UK %B EUROGRAPHICS 2014 %O The European Association for Computer Graphics 35th Annual Conference ; Strasbourg, France, April 7th – 11th, 2014 EUROGRAPHICS 2014 EG 2014
Elek, O., Bauszat, P., Ritschel, T., Magnor, M., and Seidel, H.-P. 2014a. Progressive Spectral Ray Differentials. VMV 2014 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{ElekVMV2014, TITLE = {Progressive Spectral Ray Differentials}, AUTHOR = {Elek, Oskar and Bauszat, Pablo and Ritschel, Tobias and Magnor, Marcus and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-3-905674-74-3}, PUBLISHER = {Eurographics Association}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {VMV 2014 Vision, Modeling and Visualization}, PAGES = {151--158}, ADDRESS = {Darmstadt, Germany}, }
Endnote
%0 Conference Proceedings %A Elek, Oskar %A Bauszat, Pablo %A Ritschel, Tobias %A Magnor, Marcus %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Progressive Spectral Ray Differentials : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5176-5 %D 2014 %B 19th International Workshop on Vision, Modeling and Visualization %Z date of event: 2014-10-08 - 2014-10-10 %C Darmstadt, Germany %B VMV 2014 Vision, Modeling and Visualization %P 151 - 158 %I Eurographics Association %@ 978-3-905674-74-3
Elek, O., Ritschel, T., Dachsbacher, C., and Seidel, H.-P. 2014b. Interactive Light Scattering with Principal-ordinate Propagation. Graphics Interface 2014, Canadian Information Processing Society.
Export
BibTeX
@inproceedings{ElekGI2014, TITLE = {Interactive Light Scattering with Principal-ordinate Propagation}, AUTHOR = {Elek, Oskar and Ritschel, Tobias and Dachsbacher, Carsten and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-4822-6003-8}, PUBLISHER = {Canadian Information Processing Society}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {Graphics Interface 2014}, EDITOR = {Kry, Paul G. and Bunt, Andrea}, PAGES = {87--94}, ADDRESS = {Montreal, Canada}, }
Endnote
%0 Conference Proceedings %A Elek, Oskar %A Ritschel, Tobias %A Dachsbacher, Carsten %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Interactive Light Scattering with Principal-ordinate Propagation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5181-D %D 2014 %B Graphics Interface %Z date of event: 2014-05-07 - 2014-05-09 %C Montreal, Canada %B Graphics Interface 2014 %E Kry, Paul G.; Bunt, Andrea %P 87 - 94 %I Canadian Information Processing Society %@ 978-1-4822-6003-8 %U http://people.mpi-inf.mpg.de/~oelek/Papers/PrincipalOrdinatePropagation/
Elek, O., Ritschel, T., Dachsbacher, C., and Seidel, H.-P. 2014c. Principal-ordinates Propagation for Real-time Rendering of Participating Media. Computers & Graphics 45.
Export
BibTeX
@article{ElekCAG2014, TITLE = {Principal-ordinates Propagation for Real-time Rendering of Participating Media}, AUTHOR = {Elek, Oskar and Ritschel, Tobias and Dachsbacher, Carsten and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0097-8493}, DOI = {10.1016/j.cag.2014.08.003}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2014}, DATE = {2014}, JOURNAL = {Computers \& Graphics}, VOLUME = {45}, PAGES = {28--39}, }
Endnote
%0 Journal Article %A Elek, Oskar %A Ritschel, Tobias %A Dachsbacher, Carsten %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Principal-ordinates Propagation for Real-time Rendering of Participating Media : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-516D-C %R 10.1016/j.cag.2014.08.003 %7 2014-09-06 %D 2014 %J Computers & Graphics %V 45 %& 28 %P 28 - 39 %I Elsevier %C Amsterdam %@ false
Elek, O., Bauszat, P., Ritschel, T., Magnor, M., and Seidel, H.-P. 2014d. Spectral Ray Differentials. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2014) 33, 4.
Export
BibTeX
@article{Elek2014EGSR, TITLE = {Spectral Ray Differentials}, AUTHOR = {Elek, Oskar and Bauszat, Pablo and Ritschel, Tobias and Magnor, Marcus and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12418}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2014}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {33}, NUMBER = {4}, PAGES = {113--122}, BOOKTITLE = {Eurographics Symposium on Rendering 2014}, EDITOR = {Jarosz, Wojciech and Peers, Pieter}, }
Endnote
%0 Journal Article %A Elek, Oskar %A Bauszat, Pablo %A Ritschel, Tobias %A Magnor, Marcus %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Spectral Ray Differentials : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4A77-B %R 10.1111/cgf.12418 %7 2014 %D 2014 %J Computer Graphics Forum %V 33 %N 4 %& 113 %P 113 - 122 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2014 %O Eurographics Symposium on Rendering 2014 EGSR 2014 Lyon, France, June 25th - 27th, 2014
Feit, A.M. and Oulasvirta, A. 2014. PianoText: Redesigning the Piano Keyboard for Text Entry. DIS’14, ACM SIGCHI Conference on Designing Interactive Systems, ACM.
Export
BibTeX
@inproceedings{feit2014pianotext, TITLE = {{PianoText}: {Redesigning} the Piano Keyboard for Text Entry}, AUTHOR = {Feit, Anna Maria and Oulasvirta, Antti}, LANGUAGE = {eng}, ISBN = {978-1-4503-2902-6}, DOI = {10.1145/2598510.2598547}, PUBLISHER = {ACM}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {DIS'14, ACM SIGCHI Conference on Designing Interactive Systems}, PAGES = {1045--1054}, ADDRESS = {Vancouver, Canada}, }
Endnote
%0 Conference Proceedings %A Feit, Anna Maria %A Oulasvirta, Antti %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T PianoText: Redesigning the Piano Keyboard for Text Entry : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-69FC-5 %R 10.1145/2598510.2598547 %D 2014 %B ACM SIGCHI Conference on Designing Interactive Systems %Z date of event: 2014-06-21 - 2014-06-25 %C Vancouver, Canada %B DIS'14 %P 1045 - 1054 %I ACM %@ 978-1-4503-2902-6
Garrido, P., Valgaerts, L., Rehmsen, O., Thormaehlen, T., Pérez, P., and Theobalt, C. 2014. Automatic Face Reenactment. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), IEEE Computer Society.
Export
BibTeX
@inproceedings{Garrido2014, TITLE = {Automatic Face Reenactment}, AUTHOR = {Garrido, Pablo and Valgaerts, Levi and Rehmsen, Ole and Thormaehlen, Thorsten and P{\'e}rez, Patrick and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4799-5117-8}, DOI = {10.1109/CVPR.2014.537}, PUBLISHER = {IEEE Computer Society}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014)}, PAGES = {4217--4224}, ADDRESS = {Columbus, OH, USA}, }
Endnote
%0 Conference Proceedings %A Garrido, Pablo %A Valgaerts, Levi %A Rehmsen, Ole %A Thormaehlen, Thorsten %A Pérez, Patrick %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Automatic Face Reenactment : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5155-F %R 10.1109/CVPR.2014.537 %D 2014 %B 2014 IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2014-06-23 - 2014-06-28 %C Columbus, OH, USA %B 2014 IEEE Conference on Computer Vision and Pattern Recognition %P 4217 - 4224 %I IEEE Computer Society %@ 978-1-4799-5117-8
Gong, N.-W., Steimle, J., Olberding, S., et al. 2014. PrintSense: A Versatile Sensing Technique to Support Multimodal Flexible Surface Interaction. CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{Gong14, TITLE = {{PrintSense}: {A} Versatile Sensing Technique to Support Multimodal Flexible Surface Interaction}, AUTHOR = {Gong, Nan-Wei and Steimle, J{\"u}rgen and Olberding, Simon and Hodges, Steve and Gillian, Nicholas Edward and Kawahara, Yoshihiro and Paradiso, Joseph A.}, LANGUAGE = {eng}, ISBN = {978-1-4503-2902-6}, URL = {http://doi.acm.org/10.1145/2556288.2557239}, DOI = {10.1145/2556288.2557173}, PUBLISHER = {ACM}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems}, PAGES = {1407--1410}, ADDRESS = {Toronto, Canada}, }
Endnote
%0 Conference Proceedings %A Gong, Nan-Wei %A Steimle, Jürgen %A Olberding, Simon %A Hodges, Steve %A Gillian, Nicholas Edward %A Kawahara, Yoshihiro %A Paradiso, Joseph A. %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T PrintSense: A Versatile Sensing Technique to Support Multimodal Flexible Surface Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-BFAF-E %R 10.1145/2556288.2557173 %U http://doi.acm.org/10.1145/2556288.2557239 %D 2014 %B 32nd Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2014-04-26 - 2014-05-01 %C Toronto, Canada %B CHI 2014 %P 1407 - 1410 %I ACM %@ 978-1-4503-2902-6
Gryaditskaya, Y., Pouli, T., Reinhard, E., and Seidel, H.-P. 2014. Sky Based Light Metering for High Dynamic Range Images. Computer Graphics Forum (Proc. Pacific Graphics 2014) 33, 7.
Abstract
Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and rather than rely on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel—effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.
Export
BibTeX
@article{CGF:Gryad:14, TITLE = {Sky Based Light Metering for High Dynamic Range Images}, AUTHOR = {Gryaditskaya, Yulia and Pouli, Tania and Reinhard, Erik and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {1467-8659}, DOI = {10.1111/cgf.12474}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and rather than rely on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel---effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {33}, NUMBER = {7}, PAGES = {61--69}, BOOKTITLE = {22nd Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2014)}, }
Endnote
%0 Journal Article %A Gryaditskaya, Yulia %A Pouli, Tania %A Reinhard, Erik %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Sky Based Light Metering for High Dynamic Range Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-6C64-1 %R 10.1111/cgf.12474 %7 2014 %D 2014 %X Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and rather than rely on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel—effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design. %J Computer Graphics Forum %V 33 %N 7 %& 61 %P 61 - 69 %I Wiley-Blackwell %C Oxford, UK %@ false %B 22nd Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2014 PG 2014 8 to 10 Oct 2014, Seoul, South Korea
Guenther, D., Reininghaus, J., Seidel, H.-P., and Weinkauf, T. 2014. Notes on the Simplification of the Morse-Smale Complex. Topological Methods in Data Analysis and Visualization III (TopoInVis 2013), Springer.
Abstract
The Morse-Smale complex can be either explicitly or implicitly represented. Depending on the type of representation, the simplification of the Morse-Smale complex works differently. In the explicit representation, the Morse-Smale complex is directly simplified by explicitly reconnecting the critical points during the simplification. In the implicit representation, on the other hand, the Morse-Smale complex is given by a combinatorial gradient field. In this setting, the simplification changes the combinatorial flow, which yields an indirect simplification of the Morse-Smale complex. The topological complexity of the Morse-Smale complex is reduced in both representations. However, the simplifications generally yield different results. In this paper, we emphasize the differences between these two representations, and provide a high-level discussion about their advantages and limitations.
Export
BibTeX
@inproceedings{guenther13a, TITLE = {Notes on the Simplification of the {Morse}-{Smale} Complex}, AUTHOR = {Guenther, David and Reininghaus, Jan and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-3-319-04098-1}, DOI = {10.1007/978-3-319-04099-8_9}, PUBLISHER = {Springer}, YEAR = {2013}, DATE = {2014}, ABSTRACT = {The Morse-Smale complex can be either explicitly or implicitly represented. Depending on the type of representation, the simplification of the Morse-Smale complex works differently. In the explicit representation, the Morse-Smale complex is directly simplified by explicitly reconnecting the critical points during the simplification. In the implicit representation, on the other hand, the Morse-Smale complex is given by a combinatorial gradient field. In this setting, the simplification changes the combinatorial flow, which yields an indirect simplification of the Morse-Smale complex. The topological complexity of the Morse-Smale complex is reduced in both representations. However, the simplifications generally yield different results. In this paper, we emphasize the differences between these two representations, and provide a high-level discussion about their advantages and limitations.}, BOOKTITLE = {Topological Methods in Data Analysis and Visualization III (TopoInVis 2013)}, EDITOR = {Bremer, Peer-Timo and Hotz, Ingrid and Pascucci, Valerio and Peikert, Ronald}, PAGES = {135--150}, SERIES = {Mathematics and Visualization}, ADDRESS = {Davis, CA, USA}, }
Endnote
%0 Conference Proceedings %A Guenther, David %A Reininghaus, Jan %A Seidel, Hans-Peter %A Weinkauf, Tino %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Notes on the Simplification of the Morse-Smale Complex : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-52F3-3 %R 10.1007/978-3-319-04099-8_9 %D 2014 %B TopoInVis %Z date of event: 2013-03-04 - 2013-03-06 %C Davis, CA, USA %X The Morse-Smale complex can be either explicitly or implicitly represented. Depending on the type of representation, the simplification of the Morse-Smale complex works differently. In the explicit representation, the Morse-Smale complex is directly simplified by explicitly reconnecting the critical points during the simplification. In the implicit representation, on the other hand, the Morse-Smale complex is given by a combinatorial gradient field. In this setting, the simplification changes the combinatorial flow, which yields an indirect simplification of the Morse-Smale complex. The topological complexity of the Morse-Smale complex is reduced in both representations. However, the simplifications generally yield different results. In this paper, we emphasize the differences between these two representations, and provide a high-level discussion about their advantages and limitations. %B Topological Methods in Data Analysis and Visualization III %E Bremer, Peer-Timo; Hotz, Ingrid; Pascucci, Valerio; Peikert, Ronald %P 135 - 150 %I Springer %@ 978-3-319-04098-1 %B Mathematics and Visualization
Günther, D., Jacobson, A., Reininghaus, J., Seidel, H.-P., Sorkine-Hornung, O., and Weinkauf, T. 2014a. Fast and Memory-efficient Topological Denoising of 2D and 3D Scalar Fields. IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS 2014) 20, 12.
Export
BibTeX
@article{guenther14c, TITLE = {Fast and Memory-efficient Topological Denoising of {2D} and {3D} Scalar Fields}, AUTHOR = {G{\"u}nther, David and Jacobson, Alec and Reininghaus, Jan and Seidel, Hans-Peter and Sorkine-Hornung, Olga and Weinkauf, Tino}, LANGUAGE = {eng}, ISSN = {1077-2626}, DOI = {10.1109/TVCG.2014.2346432}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2014}, DATE = {2014-12}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS)}, VOLUME = {20}, NUMBER = {12}, PAGES = {2585--2594}, BOOKTITLE = {IEEE Visual Analytics Science \& Technology Conference, IEEE Information Visualization Conference, and IEEE Scientific Visualization Conference Proceedings 2014}, DEBUG = {author: Ebert, David; author: Hauser, Helwig; author: Heer, Jeffrey; author: North, Chris; author: Tory, Melanie; author: Qu, Huamin; author: Shen, Han-Wei; author: Ynnerman, Anders}, EDITOR = {Chen, Min}, }
Endnote
%0 Journal Article %A Günther, David %A Jacobson, Alec %A Reininghaus, Jan %A Seidel, Hans-Peter %A Sorkine-Hornung, Olga %A Weinkauf, Tino %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Fast and Memory-efficient Topological Denoising of 2D and 3D Scalar Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5349-E %R 10.1109/TVCG.2014.2346432 %7 2014 %D 2014 %J IEEE Transactions on Visualization and Computer Graphics %V 20 %N 12 %& 2585 %P 2585 - 2594 %I IEEE Computer Society %C Los Alamitos, CA %@ false %B IEEE Visual Analytics Science & Technology Conference, IEEE Information Visualization Conference, and IEEE Scientific Visualization Conference Proceedings 2014 %O Proceedings 2014 ; Paris, France, 9–14 November 2014 IEEE VIS 2014
Günther, J. 2014. Ray Tracing of Dynamic Scenes. urn:nbn:de:bsz:291-scidok-59295.
Export
BibTeX
@phdthesis{GuentherPhD2014, TITLE = {Ray Tracing of Dynamic Scenes}, AUTHOR = {G{\"u}nther, Johannes}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59295}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Günther, Johannes %Y Slusallek, Philipp %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Ray Tracing of Dynamic Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-54C0-5 %U urn:nbn:de:bsz:291-scidok-59295 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 82 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5929/
Günther, T., Schulze, M., Esturo, J.M., Rössl, C., and Theisel, H. 2014b. Opacity Optimization for Surfaces. Computer Graphics Forum (Proc. EuroVis 2014) 33, 3.
Export
BibTeX
@article{CGF:CGF12357, TITLE = {Opacity Optimization for Surfaces}, AUTHOR = {G{\"u}nther, Tobias and Schulze, Maik and Esturo, Janick Martinez and R{\"o}ssl, Christian and Theisel, Holger}, LANGUAGE = {eng}, ISSN = {1467-8659}, DOI = {10.1111/cgf.12357}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. EuroVis)}, VOLUME = {33}, NUMBER = {3}, PAGES = {11--20}, BOOKTITLE = {Eurographics Conference on Visualization 2014 (EuroVis 2014)}, EDITOR = {Carr, Hamish and Rheingans, Penny and Schumann, Heidrun}, }
Endnote
%0 Journal Article %A Günther, Tobias %A Schulze, Maik %A Esturo, Janick Martinez %A Rössl, Christian %A Theisel, Holger %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Opacity Optimization for Surfaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EB80-6 %R 10.1111/cgf.12357 %7 2014-07-12 %D 2014 %K Categories and Subject Descriptors (according to ACM CCS), I.3.3 [Computer Graphics]: Three-Dimensional Graphics and Realism—Display Algorithms %J Computer Graphics Forum %V 33 %N 3 %& 11 %P 11 - 20 %I Wiley-Blackwell %C Oxford, UK %@ false %B Eurographics Conference on Visualization 2014 %O EuroVis 2014 Swansea, Wales, UK, June 9 - 13, 2014
Horváth, G., Blahó, M., Egri, A., Hegedüs, R., and Szél, G. 2014a. Circular Polarization Vision of Scarab Beetles. In: Polarized Light and Polarization Vision in Animal Sciences. Springer, New York, NY.
Export
BibTeX
@incollection{2014:AnimalSciences:Hegedues6, TITLE = {Circular Polarization Vision of Scarab Beetles}, AUTHOR = {Horv{\'a}th, G{\'a}bor and Blah{\'o}, M. and Egri, A. and Heged{\"u}s, Ramon and Sz{\'e}l, Gy}, LANGUAGE = {eng}, ISBN = {978-3-642-54717-1; 978-3-642-54718-8}, DOI = {10.1007/978-3-642-54718-8_6}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, EDITION = {2. ed.}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {Polarized Light and Polarization Vision in Animal Sciences}, EDITOR = {Horv{\'a}th, G{\'a}bor}, PAGES = {147--170}, SERIES = {Springer Series in Vision Research}, VOLUME = {2}, }
Endnote
%0 Book Section %A Horváth, Gábor %A Blahó, M. %A Egri, A. %A Hegedüs, Ramon %A Szél, Gy %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Circular Polarization Vision of Scarab Beetles : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-839D-7 %R 10.1007/978-3-642-54718-8_6 %D 2014 %B Polarized Light and Polarization Vision in Animal Sciences %E Horváth, Gábor %P 147 - 170 %I Springer %C New York, NY %@ 978-3-642-54717-1 978-3-642-54718-8 %S Springer Series in Vision Research %N 2
Horváth, G., Barta, A., and Hegedüs, R. 2014b. Polarization of the Sky. In: Polarized Light and Polarization Vision in Animal Sciences. Springer, New York, NY.
Export
BibTeX
@incollection{HorvathPolarizationSky2014, TITLE = {Polarization of the Sky}, AUTHOR = {Horv{\'a}th, G{\'a}bor and Barta, Andr{\'a}s and Heged{\"u}s, Ramon}, LANGUAGE = {eng}, ISBN = {978-3-642-54717-1; 978-3-642-54718-8}, DOI = {10.1007/978-3-642-54718-8_18}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, EDITION = {2. ed.}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {Polarized Light and Polarization Vision in Animal Sciences}, EDITOR = {Horv{\'a}th, G{\'a}bor}, PAGES = {367--406}, SERIES = {Springer Series in Vision Research}, VOLUME = {2}, }
Endnote
%0 Book Section %A Horváth, Gábor %A Barta, András %A Hegedüs, Ramon %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Polarization of the Sky : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-22D0-C %R 10.1007/978-3-642-54718-8_18 %D 2014 %B Polarized Light and Polarization Vision in Animal Sciences %E Horváth, Gábor %P 367 - 406 %I Springer %C New York, NY %@ 978-3-642-54717-1 978-3-642-54718-8 %S Springer Series in Vision Research %N 2
Horváth, G. and Hegedüs, R. 2014a. Polarization Characteristics of Forest Canopies with Biological Implications. In: Polarized Light and Polarization Vision in Animal Sciences. Springer, New York, NY.
Export
BibTeX
@incollection{HorvathPolarization2014, TITLE = {Polarization Characteristics of Forest Canopies with Biological Implications}, AUTHOR = {Horv{\'a}th, G{\'a}bor and Heged{\"u}s, Ramon}, LANGUAGE = {eng}, ISBN = {978-3-642-54717-1; 978-3-642-54718-8}, DOI = {10.1007/978-3-642-54718-8_17}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, EDITION = {2. ed.}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {Polarized Light and Polarization Vision in Animal Sciences}, EDITOR = {Horv{\'a}th, G{\'a}bor}, PAGES = {345--365}, SERIES = {Springer Series in Vision Research}, VOLUME = {2}, }
Endnote
%0 Book Section %A Horváth, Gábor %A Hegedüs, Ramon %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Polarization Characteristics of Forest Canopies with Biological Implications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-22CE-3 %R 10.1007/978-3-642-54718-8_17 %D 2014 %B Polarized Light and Polarization Vision in Animal Sciences %E Horváth, Gábor %P 345 - 365 %I Springer %C New York, NY %@ 978-3-642-54717-1 978-3-642-54718-8 %S Springer Series in Vision Research %N 2
Horváth, G. and Hegedüs, R. 2014b. Polarization-Induced False Colours. In: Polarized Light and Polarization Vision in Animal Sciences. Springer, New York, NY.
Export
BibTeX
@incollection{HorvathColours2014, TITLE = {Polarization-Induced False Colours}, AUTHOR = {Horv{\'a}th, G{\'a}bor and Heged{\"u}s, Ramon}, LANGUAGE = {eng}, ISBN = {978-3-642-54717-1; 978-3-642-54718-8}, DOI = {10.1007/978-3-642-54718-8_13}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, EDITION = {2. ed.}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {Polarized Light and Polarization Vision in Animal Sciences}, EDITOR = {Horv{\'a}th, G{\'a}bor}, PAGES = {293--302}, SERIES = {Springer Series in Vision Research}, VOLUME = {2}, }
Endnote
%0 Book Section %A Horváth, Gábor %A Hegedüs, Ramon %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Polarization-Induced False Colours : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-22CC-7 %R 10.1007/978-3-642-54718-8_13 %D 2014 %B Polarized Light and Polarization Vision in Animal Sciences %E Horváth, Gábor %P 293 - 302 %I Springer %C New York, NY %@ 978-3-642-54717-1 978-3-642-54718-8 %S Springer Series in Vision Research %N 2
Ihrke, I. 2014. Opacity. In: Computer Vision. Springer, Berlin.
Export
BibTeX
@incollection{Ihrke2011, TITLE = {Opacity}, AUTHOR = {Ihrke, Ivo}, LANGUAGE = {eng}, ISBN = {978-0-387-30771-8}, DOI = {10.1007/978-0-387-31439-6_564}, PUBLISHER = {Springer}, ADDRESS = {Berlin}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {Computer Vision}, PAGES = {562--564}, }
Endnote
%0 Book Section %A Ihrke, Ivo %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Opacity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-2556-A %R 10.1007/978-0-387-31439-6_564 %D 2014 %B Computer Vision %P 562 - 564 %I Springer %C Berlin %@ 978-0-387-30771-8
Jain, A. 2014. Data-driven Methods for Interactive Visual Content Creation and Manipulation. urn:nbn:de:bsz:291-scidok-58210.
Export
BibTeX
@phdthesis{PhDThesis:JainArjun, TITLE = {Data-driven Methods for Interactive Visual Content Creation and Manipulation}, AUTHOR = {Jain, Arjun}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58210}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Jain, Arjun %Y Thormählen, Thorsten %A referee: Schiele, Bernt %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Data-driven Methods for Interactive Visual Content Creation and Manipulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EB82-2 %U urn:nbn:de:bsz:291-scidok-58210 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P XV, 82 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5821/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Karrenbauer, A. and Oulasvirta, A. 2014. Improvements to Keyboard Optimization with Integer Programming. UIST’14, 27th Annual ACM Symposium on User Interface Software and Technology, ACM.
Export
BibTeX
@inproceedings{KO2014, TITLE = {Improvements to Keyboard Optimization with Integer Programming}, AUTHOR = {Karrenbauer, Andreas and Oulasvirta, Antti}, LANGUAGE = {eng}, ISBN = {978-1-4503-3069-5}, DOI = {10.1145/2642918.2647382}, PUBLISHER = {ACM}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {UIST'14, 27th Annual ACM Symposium on User Interface Software and Technology}, EDITOR = {Benko, Hrvoje and Dontcheva, Mira and Wigdor, Daniel}, PAGES = {621--626}, ADDRESS = {Honolulu, HI, USA}, }
Endnote
%0 Conference Proceedings %A Karrenbauer, Andreas %A Oulasvirta, Antti %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Improvements to Keyboard Optimization with Integer Programming : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-43F4-B %R 10.1145/2642918.2647382 %D 2014 %B 27th Annual ACM Symposium on User Interface Software and Technology %Z date of event: 2014-10-05 - 2014-10-08 %C Honolulu, HI, USA %B UIST'14 %E Benko, Hrvoje; Dontcheva, Mira; Wigdor, Daniel %P 621 - 626 %I ACM %@ 978-1-4503-3069-5
Kawahara, Y., Hodges, S., Olberding, S., Steimle, J., and Gong, N.-W. 2014. Building Functional Prototypes Using Conductive Inkjet Printing. IEEE Pervasive Computing 13, 3.
Export
BibTeX
@article{6850258, TITLE = {Building Functional Prototypes Using Conductive Inkjet Printing}, AUTHOR = {Kawahara, Yoshihiro and Hodges, Steve and Olberding, Simon and Steimle, J{\"u}rgen and Gong, Nan-Wei}, LANGUAGE = {eng}, ISSN = {1536-1268}, DOI = {10.1109/MPRV.2014.41}, PUBLISHER = {IEEE}, ADDRESS = {Piscataway, NJ}, YEAR = {2014}, DATE = {2014}, JOURNAL = {IEEE Pervasive Computing}, VOLUME = {13}, NUMBER = {3}, PAGES = {30--38}, }
Endnote
%0 Journal Article %A Kawahara, Yoshihiro %A Hodges, Steve %A Olberding, Simon %A Steimle, Jürgen %A Gong, Nan-Wei %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Building Functional Prototypes Using Conductive Inkjet Printing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-BFA6-F %R 10.1109/MPRV.2014.41 %7 2014 %D 2014 %K flexible electronics;ink jet printing;printed circuit manufacture;3D printers;conductive circuits;conductive inkjet printing process;consumer-grade inkjet printer;custom-made subcircuits;electronic circuits;fabrication techniques;flexible substrate;functional device prototypes;off-the-shelf electronic components;pervasive computing;printed conductive patterns;prototyping mechanical structures;proximity-sensitive surfaces;single wiring layer;touch-sensitive surfaces;Capacitive sensors;Digital systems;Electronic equipment;Fabrication;Ink jet printing;Printers;Resistance;Substrates;Virtual manufacturing;capacitive sensors;conductive ink;digital fabrication;inkjet printing;pervasive computing;rapid prototyping;touch sensing %J IEEE Pervasive Computing %V 13 %N 3 %& 30 %P 30 - 38 %I IEEE %C Piscataway, NJ %@ false
Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2014a. Improving Perception of Binocular Stereo Motion on 3D Display Devices. Stereoscopic Displays and Applications XXV, SPIE.
Abstract
This paper studies the presentation of moving stereo images on different display devices. We address three representative issues. First, we propose temporal compensation for the Pulfrich effect found when using anaglyph glasses. Second, we describe how content-adaptive capture protocols can reduce false motion-in-depth sensation for time-multiplexing based displays. Third, we conclude with a recommendation on how to improve rendering of synthetic stereo animations.
Export
BibTeX
@inproceedings{Kellnhofer2014a, TITLE = {Improving Perception of Binocular Stereo Motion on {3D} Display Devices}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0277-786X}, ISBN = {9780819499288}, DOI = {10.1117/12.2032389}, PUBLISHER = {SPIE}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {This paper studies the presentation of moving stereo images on different display devices. We address three representative issues. First, we propose temporal compensation for the Pulfrich effect found when using anaglyph glasses. Second, we describe how content-adaptive capture protocols can reduce false motion-in-depth sensation for time-multiplexing based displays. Third, we conclude with a recommendation on how to improve rendering of synthetic stereo animations.}, BOOKTITLE = {Stereoscopic Displays and Applications XXV}, EDITOR = {Woods, Andrew J. and Holliman, Nicolas S. and Favalora, Gregg E.}, PAGES = {1--11}, EID = {901116}, SERIES = {Proceedings of SPIE-IS\&T Electronic Imaging}, VOLUME = {9011}, ADDRESS = {San Francisco, CA, USA}, }
Endnote
%0 Conference Proceedings %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Improving Perception of Binocular Stereo Motion on 3D Display Devices : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-318D-7 %R 10.1117/12.2032389 %D 2014 %B Stereoscopic Displays and Applications XXV %Z date of event: 2014-02-03 - 2014-02-05 %C San Francisco, CA, USA %X This paper studies the presentation of moving stereo images on different display devices. We address three representative issues. First, we propose temporal compensation for the Pulfrich effect found when using anaglyph glasses. Second, we describe how content-adaptive capture protocols can reduce false motion-in-depth sensation for time-multiplexing based displays. Third, we conclude with a recommendation on how to improve rendering of synthetic stereo animations. %B Stereoscopic Displays and Applications XXV %E Woods, Andrew J.; Holliman, Nicolas S.; Favalora, Gregg E. %P 1 - 11 %Z sequence number: 901116 %I SPIE %@ 9780819499288 %B Proceedings of SPIE-IS&T Electronic Imaging %N 9011 %@ false
Kellnhofer, P., Ritschel, T., Vangorp, P., Myszkowski, K., and Seidel, H.-P. 2014b. Stereo Day-for-Night: Retargeting Disparity for Scotopic Vision. ACM Transactions on Applied Perception 11, 3.
Export
BibTeX
@article{kellnhofer:2014c:DarkStereo, TITLE = {Stereo Day-for-Night: Retargeting Disparity for Scotopic Vision}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Vangorp, Peter and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {1544-3558}, DOI = {10.1145/2644813}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, DATE = {2014}, JOURNAL = {ACM Transactions on Applied Perception}, VOLUME = {11}, NUMBER = {3}, EID = {15}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Ritschel, Tobias %A Vangorp, Peter %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Stereo Day-for-Night: Retargeting Disparity for Scotopic Vision : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EE0E-E %R 10.1145/2644813 %7 2014 %D 2014 %J ACM Transactions on Applied Perception %V 11 %N 3 %Z sequence number: 15 %I ACM %C New York, NY %@ false
Khattab, D., Theobalt, C., Hussein, A.S., and Tolba, M.F. 2014. Modified GrabCut for Human Face Segmentation. Ain Shams Engineering Journal 5, 4.
Abstract
GrabCut is a segmentation technique for 2D still color images, which is mainly based on an iterative energy minimization. The energy function of the GrabCut optimization algorithm is based mainly on a probabilistic model for pixel color distribution. Therefore, GrabCut may introduce unacceptable results in the cases of low contrast between foreground and background colors. In this manner, this paper presents a modified GrabCut technique for the segmentation of human faces from images of full humans. The modified technique introduces a new face location model for the energy minimization function of the GrabCut, in addition to the existing color one. This location model considers the distance distribution of the pixels from the silhouette boundary of a fitted head, of a 3D morphable model, to the image. The experimental results of the modified GrabCut have demonstrated better segmentation robustness and accuracy compared to the original GrabCut for human face segmentation.
Export
BibTeX
@article{Khattab20141083, TITLE = {Modified {GrabCut} for Human Face Segmentation}, AUTHOR = {Khattab, Dina and Theobalt, Christian and Hussein, Ashraf S. and Tolba, Mohamed F.}, LANGUAGE = {eng}, ISSN = {2090-4479}, DOI = {10.1016/j.asej.2014.04.012}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {GrabCut is a segmentation technique for 2D still color images, which is mainly based on an iterative energy minimization. The energy function of the GrabCut optimization algorithm is based mainly on a probabilistic model for pixel color distribution. Therefore, GrabCut may introduce unacceptable results in the cases of low contrast between foreground and background colors. In this manner, this paper presents a modified GrabCut technique for the segmentation of human faces from images of full humans. The modified technique introduces a new face location model for the energy minimization function of the GrabCut, in addition to the existing color one. This location model considers the distance distribution of the pixels from the silhouette boundary of a fitted head, of a 3D morphable model, to the image. The experimental results of the modified GrabCut have demonstrated better segmentation robustness and accuracy compared to the original GrabCut for human face segmentation.}, JOURNAL = {Ain Shams Engineering Journal}, VOLUME = {5}, NUMBER = {4}, PAGES = {1083--1091}, }
Endnote
%0 Journal Article %A Khattab, Dina %A Theobalt, Christian %A Hussein, Ashraf S. %A Tolba, Mohamed F. %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Modified GrabCut for Human Face Segmentation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF83-F %R 10.1016/j.asej.2014.04.012 %7 2014 %D 2014 %X GrabCut is a segmentation technique for 2D still color images, which is mainly based on an iterative energy minimization. The energy function of the GrabCut optimization algorithm is based mainly on a probabilistic model for pixel color distribution. Therefore, GrabCut may introduce unacceptable results in the cases of low contrast between foreground and background colors. In this manner, this paper presents a modified GrabCut technique for the segmentation of human faces from images of full humans. The modified technique introduces a new face location model for the energy minimization function of the GrabCut, in addition to the existing color one. This location model considers the distance distribution of the pixels from the silhouette boundary of a fitted head, of a 3D morphable model, to the image. The experimental results of the modified GrabCut have demonstrated better segmentation robustness and accuracy compared to the original GrabCut for human face segmentation. %K Image segmentation %J Ain Shams Engineering Journal %V 5 %N 4 %& 1083 %P 1083 - 1091 %I Elsevier %C Amsterdam %@ false %U http://www.sciencedirect.com/science/article/pii/S2090447914000562
Kim, K.I., Tompkin, J., and Theobalt, C. 2014. Local High-order Regularization on Data Manifolds. Max-Planck-Institut für Informatik, Saarbrücken.
Export
BibTeX
@techreport{KimTR2014, TITLE = {Local High-order Regularization on Data Manifolds}, AUTHOR = {Kim, Kwang In and Tompkin, James and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2014-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, TYPE = {Research Report}, }
Endnote
%0 Report %A Kim, Kwang In %A Tompkin, James %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Local High-order Regularization on Data Manifolds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-B210-7 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2014 %P 12 p. %B Research Report %@ false
Klehm, O., Ihrke, I., Seidel, H.-P., and Eisemann, E. 2014a. Property and Lighting Manipulations for Static Volume Stylization Using a Painting Metaphor. IEEE Transactions on Visualization and Computer Graphics 20, 7.
Export
BibTeX
@article{PLM-tvcg_Klehm2014, TITLE = {Property and Lighting Manipulations for Static Volume Stylization Using a Painting Metaphor}, AUTHOR = {Klehm, Oliver and Ihrke, Ivo and Seidel, Hans-Peter and Eisemann, Elmar}, LANGUAGE = {eng}, ISSN = {1077-2626}, DOI = {10.1109/TVCG.2014.13}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2014}, DATE = {2014-07}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics}, VOLUME = {20}, NUMBER = {7}, PAGES = {983--995}, }
Endnote
%0 Journal Article %A Klehm, Oliver %A Ihrke, Ivo %A Seidel, Hans-Peter %A Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Property and Lighting Manipulations for Static Volume Stylization Using a Painting Metaphor : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-51CA-B %R 10.1109/TVCG.2014.13 %7 2014 %D 2014 %K rendering (computer graphics);artistic control;environmental lighting;image component;lighting manipulations;noise function parameters;painting metaphor;property manipulations;realistic rendering;static volume stylization;static volumes;tomographic reconstruction;volume appearance;volume properties;volumetric rendering equation;Equations;Image reconstruction;Lighting;Mathematical model;Optimization;Rendering (computer graphics);Scattering;Artist control;optimization;participating media %J IEEE Transactions on Visualization and Computer Graphics %V 20 %N 7 %& 983 %P 983 - 995 %I IEEE Computer Society %C Los Alamitos, CA %@ false
Klehm, O., Seidel, H.-P., and Eisemann, E. 2014b. Filter-based Real-time Single Scattering using Rectified Shadow Maps. Journal of Computer Graphics Techniques 3, 3.
Export
BibTeX
@article{fbss_jcgtKlehm2014, TITLE = {Filter-based Real-time Single Scattering using Rectified Shadow Maps}, AUTHOR = {Klehm, Oliver and Seidel, Hans-Peter and Eisemann, Elmar}, LANGUAGE = {eng}, ISSN = {2331-7418}, URL = {http://jcgt.org/published/0003/03/02/}, PUBLISHER = {Williams College}, ADDRESS = {Williamstown, MA}, YEAR = {2014}, DATE = {2014-08}, JOURNAL = {Journal of Computer Graphics Techniques}, VOLUME = {3}, NUMBER = {3}, PAGES = {7--34}, }
Endnote
%0 Journal Article %A Klehm, Oliver %A Seidel, Hans-Peter %A Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Filter-based Real-time Single Scattering using Rectified Shadow Maps : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-51B3-E %U http://jcgt.org/published/0003/03/02/ %7 2014 %D 2014 %J Journal of Computer Graphics Techniques %O JCGT %V 3 %N 3 %& 7 %P 7 - 34 %I Williams College %C Williamstown, MA %@ false %U http://jcgt.org/published/0003/03/02/
Klehm, O., Seidel, H.-P., and Eisemann, E. 2014c. Prefiltered Single Scattering. Proceedings I3D 2014, ACM.
Export
BibTeX
@inproceedings{Klehm:2014:PSS:2556700.2556704, TITLE = {Prefiltered Single Scattering}, AUTHOR = {Klehm, Oliver and Seidel, Hans-Peter and Eisemann, Elmar}, LANGUAGE = {eng}, ISBN = {978-1-4503-2717-6}, DOI = {10.1145/2556700.2556704}, PUBLISHER = {ACM}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {Proceedings I3D 2014}, EDITOR = {Keyser, John and Sander, Pedro}, PAGES = {71--78}, ADDRESS = {San Francisco, CA, USA}, }
Endnote
%0 Conference Proceedings %A Klehm, Oliver %A Seidel, Hans-Peter %A Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Prefiltered Single Scattering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-51C5-6 %R 10.1145/2556700.2556704 %D 2014 %B 18th Meeting of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games %Z date of event: 2014-03-14 - 2014-03-16 %C San Francisco, CA, USA %K participating media, scattering, shadow test %B Proceedings I3D 2014 %E Keyser, John; Sander, Pedro %P 71 - 78 %I ACM %@ 978-1-4503-2717-6
Konz, V. and Schuricht, F. 2014. Contact with a Corner for Nonlinearly Elastic Rods. Journal of Elasticity 117, 1.
Export
BibTeX
@article{KonzSchuricht2014, TITLE = {Contact with a Corner for Nonlinearly Elastic Rods}, AUTHOR = {Konz, Verena and Schuricht, Friedemann}, LANGUAGE = {eng}, ISSN = {0374-3535}, DOI = {10.1007/s10659-013-9462-1}, PUBLISHER = {Springer}, ADDRESS = {Dordrecht}, YEAR = {2014}, DATE = {2014}, JOURNAL = {Journal of Elasticity}, VOLUME = {117}, NUMBER = {1}, PAGES = {1--20}, }
Endnote
%0 Journal Article %A Konz, Verena %A Schuricht, Friedemann %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Contact with a Corner for Nonlinearly Elastic Rods : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-6C88-2 %F ISI: 000341864200001 %R 10.1007/s10659-013-9462-1 %7 2013 %D 2014 %J Journal of Elasticity %V 117 %N 1 %& 1 %P 1 - 20 %I Springer %C Dordrecht %@ false
Kozlov, Y. 2014. Analysis of Energy Regularization for Harmonic Surface Deformation.
Abstract
Recently it has been shown that regularization can be beneficial for a variety of geometry processing methods on discretized domains. Linear energy regularization, proposed by Martinez Esturo et al. [MRT14], creates a global, linear regularization term which is strongly coupled with the deformation energy. It can be computed interactively, with little impact on runtime. This work analyzes the effects of linear energy regularization on harmonic surface deformation, proposed by Zayer et al. [ZRKS05]. Harmonic surface deformation is a variational technique for gradient domain surface manipulation. This work demonstrates that linear energy regularization can overcome some of the inherent limitations of this technique and effectively reduce common artifacts associated with this method, eliminating the need for costly non-linear regularization and expanding the modeling capabilities for harmonic surface deformation.
Export
BibTeX
@mastersthesis{Kozlov2014, TITLE = {Analysis of Energy Regularization for Harmonic Surface Deformation}, AUTHOR = {Kozlov, Yeara}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Recently it has been shown that regularization can be beneficial for a variety of geometry processing methods on discretized domains. Linear energy regularization, proposed by Martinez Esturo et al. [MRT14], creates a global, linear regularization term which is strongly coupled with the deformation energy. It can be computed interactively, with little impact on runtime. This work analyzes the effects of linear energy regularization on harmonic surface deformation, proposed by Zayer et al. [ZRKS05]. Harmonic surface deformation is a variational technique for gradient domain surface manipulation. This work demonstrates that linear energy regularization can overcome some of the inherent limitations of this technique and effectively reduce common artifacts associated with this method, eliminating the need for costly non-linear regularization and expanding the modeling capabilities for harmonic surface deformation.}, }
Endnote
%0 Thesis %A Kozlov, Yeara %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Analysis of Energy Regularization for Harmonic Surface Deformation : %U http://hdl.handle.net/11858/00-001M-0000-001A-34CB-9 %I Universität des Saarlandes %C Saarbrücken %D 2014 %V master %9 master %X Recently it has been shown that regularization can be beneficial for a variety of geometry processing methods on discretized domains. Linear energy regularization, proposed by Martinez Esturo et al. [MRT14], creates a global, linear regularization term which is strongly coupled with the deformation energy. It can be computed interactively, with little impact on runtime. This work analyzes the effects of linear energy regularization on harmonic surface deformation, proposed by Zayer et al. [ZRKS05]. Harmonic surface deformation is a variational technique for gradient domain surface manipulation. This work demonstrates that linear energy regularization can overcome some of the inherent limitations of this technique and effectively reduce common artifacts associated with this method, eliminating the need for costly non-linear regularization and expanding the modeling capabilities for harmonic surface deformation.
Kozlov, Y., Esturo, J.M., Seidel, H.-P., and Weinkauf, T. 2014. Regularized Harmonic Surface Deformation. http://arxiv.org/abs/1408.3326.
(arXiv: 1408.3326)
Abstract
Harmonic surface deformation is a well-known geometric modeling method that creates plausible deformations in an interactive manner. However, this method is susceptible to artifacts, in particular close to the deformation handles. These artifacts often correlate with strong gradients of the deformation energy. In this work, we propose a novel formulation of harmonic surface deformation, which incorporates a regularization of the deformation energy. To do so, we build on and extend a recently introduced generic linear regularization approach. It can be expressed as a change of norm for the linear optimization problem, i.e., the regularization is baked into the optimization. This minimizes the implementation complexity and has only a small impact on runtime. Our results show that a moderate use of regularization suppresses many deformation artifacts common to the well-known harmonic surface deformation method, without introducing new artifacts.
Export
BibTeX
@online{kozlov14, TITLE = {Regularized Harmonic Surface Deformation}, AUTHOR = {Kozlov, Yeara and Esturo, Janick Martinez and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1408.3326}, EPRINT = {1408.3326}, EPRINTTYPE = {arXiv}, YEAR = {2014}, ABSTRACT = {Harmonic surface deformation is a well-known geometric modeling method that creates plausible deformations in an interactive manner. However, this method is susceptible to artifacts, in particular close to the deformation handles. These artifacts often correlate with strong gradients of the deformation energy. In this work, we propose a novel formulation of harmonic surface deformation, which incorporates a regularization of the deformation energy. To do so, we build on and extend a recently introduced generic linear regularization approach. It can be expressed as a change of norm for the linear optimization problem, i.e., the regularization is baked into the optimization. This minimizes the implementation complexity and has only a small impact on runtime. Our results show that a moderate use of regularization suppresses many deformation artifacts common to the well-known harmonic surface deformation method, without introducing new artifacts.}, }
Endnote
%0 Report %A Kozlov, Yeara %A Esturo, Janick Martinez %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Regularized Harmonic Surface Deformation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-49F5-A %U http://arxiv.org/abs/1408.3326 %D 2014 %X Harmonic surface deformation is a well-known geometric modeling method that creates plausible deformations in an interactive manner. However, this method is susceptible to artifacts, in particular close to the deformation handles. These artifacts often correlate with strong gradients of the deformation energy. In this work, we propose a novel formulation of harmonic surface deformation, which incorporates a regularization of the deformation energy. To do so, we build on and extend a recently introduced generic linear regularization approach. It can be expressed as a change of norm for the linear optimization problem, i.e., the regularization is baked into the optimization. This minimizes the implementation complexity and has only a small impact on runtime. Our results show that a moderate use of regularization suppresses many deformation artifacts common to the well-known harmonic surface deformation method, without introducing new artifacts. %K Computer Science, Graphics, cs.GR
Kurz, C., Wu, X., Wand, M., Thormählen, T., Kohli, P., and Seidel, H.-P. 2014. Symmetry-aware Template Deformation and Fitting. Computer Graphics Forum 33, 6.
Export
BibTeX
@article{Kurz2014, TITLE = {Symmetry-aware Template Deformation and Fitting}, AUTHOR = {Kurz, Christian and Wu, Xiaokun and Wand, Michael and Thorm{\"a}hlen, Thorsten and Kohli, P. and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12344}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2014}, DATE = {2014}, JOURNAL = {Computer Graphics Forum}, VOLUME = {33}, NUMBER = {6}, PAGES = {205--219}, }
Endnote
%0 Journal Article %A Kurz, Christian %A Wu, Xiaokun %A Wand, Michael %A Thormählen, Thorsten %A Kohli, P. %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Symmetry-aware Template Deformation and Fitting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D2B-D %R 10.1111/cgf.12344 %7 2014-03-20 %D 2014 %J Computer Graphics Forum %V 33 %N 6 %& 205 %P 205 - 219 %I Wiley-Blackwell %C Oxford
Kurz, C. 2014. Constrained Camera Motion Estimation and 3D Reconstruction. urn:nbn:de:bsz:291-scidok-59439.
Export
BibTeX
@phdthesis{KurzPhD2014, TITLE = {Constrained Camera Motion Estimation and {3D} Reconstruction}, AUTHOR = {Kurz, Christian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59439}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Kurz, Christian %Y Seidel, Hans-Peter %A referee: Thormählen, Thorsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Constrained Camera Motion Estimation and 3D Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-54C2-1 %U urn:nbn:de:bsz:291-scidok-59439 %I Universität des Saarlandes %C Saarbrücken %D 2014 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5943/
Levinkov, E. 2014. Scene Segmentation in Adverse Vision Conditions. Pattern Recognition (GCPR 2014), Springer.
Export
BibTeX
@inproceedings{882, TITLE = {Scene Segmentation in Adverse Vision Conditions}, AUTHOR = {Levinkov, Evgeny}, LANGUAGE = {eng}, ISBN = {978-3-319-11751-5}, DOI = {10.1007/978-3-319-11752-2_64}, PUBLISHER = {Springer}, YEAR = {2014}, DATE = {2014-09}, BOOKTITLE = {Pattern Recognition (GCPR 2014)}, EDITOR = {Jiang, Xiaoyi and Hornegger, Joachim and Koch, Reinhard}, PAGES = {750--756}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {8753}, ADDRESS = {M{\"u}nster, Germany}, }
Endnote
%0 Conference Proceedings %A Levinkov, Evgeny %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Scene Segmentation in Adverse Vision Conditions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4CD4-5 %R 10.1007/978-3-319-11752-2_64 %D 2014 %B 36th German Conference on Pattern Recognition %Z date of event: 2014-09-02 - 2014-09-05 %C Münster, Germany %B Pattern Recognition %E Jiang, Xiaoyi; Hornegger, Joachim; Koch, Reinhard %P 750 - 756 %I Springer %@ 978-3-319-11751-5 %B Lecture Notes in Computer Science %N 8753
Lissermann, R., Huber, J., Schmitz, M., Steimle, J., and Mühlhäuser, M. 2014. Permulin: Mixed-focus Collaboration on Multi-view Tabletops. CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM.