Publications

2016
Alvarez-Cortez, S., Kunkel, T., and Masia, B. 2016. Practical Low-Cost Recovery of Spectral Power Distributions. Computer Graphics Forum 35, 1.
Export
BibTeX
@article{MasiaCGF2016, TITLE = {Practical Low-Cost Recovery of Spectral Power Distributions}, AUTHOR = {Alvarez-Cortez, Sara and Kunkel, Timo and Masia, Belen}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12717}, PUBLISHER = {Wiley}, ADDRESS = {Chichester}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Computer Graphics Forum}, VOLUME = {35}, NUMBER = {1}, PAGES = {166--178}, }
Endnote
%0 Journal Article %A Alvarez-Cortez, Sara %A Kunkel, Timo %A Masia, Belen %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Practical Low-Cost Recovery of Spectral Power Distributions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-1A2F-4 %R 10.1111/cgf.12717 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 1 %& 166 %P 166 - 178 %I Wiley %C Chichester %@ false
Bachynskyi, M. 2016. Biomechanical Models for Human-computer Interaction. PhD thesis, Universität des Saarlandes, Saarbrücken. urn:nbn:de:bsz:291-scidok-66888.
Abstract
Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, are already a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number, or even absence, of input movement constraints imposed by a device form factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is the source of four issues for the research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions:
- adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method;
- identify applicability limits of the method for a range of HCI tasks;
- validate the method outputs against ground-truth recordings in a typical HCI setting;
- demonstrate the added value of the method in the analysis of performance and ergonomics of touchscreen devices; and
- summarize the performance and ergonomics of a movement space through a clustering of physiological data.
The proposed method successfully deals with the four above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) and at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of the movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards a solution of the issue of post-desktop knowledge sparsity.
Export
BibTeX
@phdthesis{Bachyphd16, TITLE = {Biomechanical Models for Human-computer Interaction}, AUTHOR = {Bachynskyi, Myroslav}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66888}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, ABSTRACT = {Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, already are a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number or even absence of input movement constraints imposed by a device form-factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of 4 issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions: - adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; - identify applicability limits of the method for a range of HCI tasks; - validate the method outputs against ground truth recordings in typical HCI setting; - demonstrate the added value of the method in analysis of performance and ergonomics of touchscreen devices; and - summarize performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the 4 above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) as well as at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity.}, }
Endnote
%0 Thesis %A Bachynskyi, Myroslav %Y Steimle, Jürgen %A referee: Schmidt, Albrecht %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Biomechanical Models for Human-computer Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-0FD4-9 %U urn:nbn:de:bsz:291-scidok-66888 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xiv, 206 p. %V phd %9 phd %X Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, already are a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number or even absence of input movement constraints imposed by a device form-factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of 4 issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions: - adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; - identify applicability limits of the method for a range of HCI tasks; - validate the method outputs against ground truth recordings in typical HCI setting; - demonstrate the added value of the method in analysis of performance and ergonomics of touchscreen devices; and - summarize performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the 4 above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) as well as at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity. 
%U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6688/
Boechat, P., Dokter, M., Kenzel, M., Seidel, H.-P., Schmalstieg, D., and Steinberger, M. Representing and Scheduling Procedural Generation using Operator Graphs. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
(Accepted/in press)
Export
BibTeX
@article{BoaechatSIGGRAPHAsia2016, TITLE = {Representing and Scheduling Procedural Generation using Operator Graphs}, AUTHOR = {Boechat, Pedro and Dokter, Mark and Kenzel, Michael and Seidel, Hans-Peter and Schmalstieg, Dieter and Steinberger, Markus}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, PUBLREMARK = {Accepted}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Boechat, Pedro %A Dokter, Mark %A Kenzel, Michael %A Seidel, Hans-Peter %A Schmalstieg, Dieter %A Steinberger, Markus %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Representing and Scheduling Procedural Generation using Operator Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-98BB-0 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Brandt, C., von Tycowicz, C., and Hildebrandt, K. 2016. Geometric Flows of Curves in Shape Space for Processing Motion of Deformable Objects. Computer Graphics Forum (Proc. EUROGRAPHICS 2016) 35, 2.
Export
BibTeX
@article{Hildebrandt_EG2016, TITLE = {Geometric Flows of Curves in Shape Space for Processing Motion of Deformable Objects}, AUTHOR = {Brandt, Christopher and von Tycowicz, Christoph and Hildebrandt, Klaus}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12832}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {35}, NUMBER = {2}, PAGES = {295--305}, BOOKTITLE = {The European Association for Computer Graphics 37th Annual Conference (EUROGRAPHICS 2016)}, }
Endnote
%0 Journal Article %A Brandt, Christopher %A von Tycowicz, Christoph %A Hildebrandt, Klaus %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Geometric Flows of Curves in Shape Space for Processing Motion of Deformable Objects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-D22B-8 %R 10.1111/cgf.12832 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 2 %& 295 %P 295 - 305 %I Wiley-Blackwell %C Oxford %@ false %B The European Association for Computer Graphics 37th Annual Conference %O EUROGRAPHICS 2016 Lisbon, Portugal, 9th-13th May 2016 EG 2016
Chen, R. and Gotsman, C. 2016a. Complex Transfinite Barycentric Mappings with Similarity Kernels. Computer Graphics Forum (Proc. SGP 2016) 35, 5.
Export
BibTeX
@article{ChenSGP2016, TITLE = {Complex Transfinite Barycentric Mappings with Similarity Kernels}, AUTHOR = {Chen, Renjie and Gotsman, Craig}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.1296}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Chichester}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. SGP)}, VOLUME = {35}, NUMBER = {5}, PAGES = {51--53}, BOOKTITLE = {Symposium on Geometry Processing 2016 (SGP 2016)}, EDITOR = {Ovsjanikov, Maks and Panozzo, Daniele}, }
Endnote
%0 Journal Article %A Chen, Renjie %A Gotsman, Craig %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Complex Transfinite Barycentric Mappings with Similarity Kernels : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-430B-5 %R 10.1111/cgf.1296 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 5 %& 51 %P 51 - 53 %I Wiley-Blackwell %C Chichester %@ false %B Symposium on Geometry Processing 2016 %O Berlin, Germany ; June 20 - 24, 2016 SGP 2016 Eurographics Symposium on Geometric Processing 2016
Chen, R. and Gotsman, C. 2016b. On Pseudo-harmonic Barycentric Coordinates. Computer Aided Geometric Design 44.
Export
BibTeX
@article{Chen_Gotsman2016, TITLE = {On Pseudo-harmonic Barycentric Coordinates}, AUTHOR = {Chen, Renjie and Gotsman, Craig}, LANGUAGE = {eng}, ISSN = {0167-8396}, DOI = {10.1016/j.cagd.2016.04.005}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Computer Aided Geometric Design}, VOLUME = {44}, PAGES = {15--35}, }
Endnote
%0 Journal Article %A Chen, Renjie %A Gotsman, Craig %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T On Pseudo-harmonic Barycentric Coordinates : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-05AD-6 %R 10.1016/j.cagd.2016.04.005 %7 2016 %D 2016 %J Computer Aided Geometric Design %V 44 %& 15 %P 15 - 35 %I Elsevier %C Amsterdam %@ false
Chen, R. and Gotsman, C. 2016c. Generalized As-Similar-As-Possible Warping with Applications in Digital Photography. Computer Graphics Forum (Proc. EUROGRAPHICS 2016) 35, 2.
Export
BibTeX
@article{ChenEG2016, TITLE = {Generalized As-Similar-As-Possible Warping with Applications in Digital Photography}, AUTHOR = {Chen, Renjie and Gotsman, Craig}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12813}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {35}, NUMBER = {2}, PAGES = {81--92}, BOOKTITLE = {The European Association for Computer Graphics 37th Annual Conference (EUROGRAPHICS 2016)}, }
Endnote
%0 Journal Article %A Chen, Renjie %A Gotsman, Craig %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Generalized As-Similar-As-Possible Warping with Applications in Digital Photography : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-8BBD-4 %R 10.1111/cgf.12813 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 2 %& 81 %P 81 - 92 %I Wiley-Blackwell %C Oxford %@ false %B The European Association for Computer Graphics 37th Annual Conference %O EUROGRAPHICS 2016 Lisbon, Portugal, 9th-13th May 2016 EG 2016
Chien, E., Chen, R., and Weber, O. 2016. Bounded Distortion Harmonic Shape Interpolation. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{ChienSIGGRAPH2016, TITLE = {Bounded Distortion Harmonic Shape Interpolation}, AUTHOR = {Chien, Edward and Chen, Renjie and Weber, Ofir}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925926}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {105}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Chien, Edward %A Chen, Renjie %A Weber, Ofir %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Bounded Distortion Harmonic Shape Interpolation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0793-A %R 10.1145/2897824.2925926 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 105 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Dąbała, Ł., Ziegler, M., Didyk, P., et al. Efficient Multi-Image Correspondences for Online Light. Computer Graphics Forum (Proc. Pacific Graphics 2016) 35, 7.
(Accepted/in press)
Export
BibTeX
@article{DabalaPG2016, TITLE = {Efficient Multi-Image Correspondences for Online Light}, AUTHOR = {D{\c a}ba{\l}a, {\L}ukasz and Ziegler, Matthias and Didyk, Piotr and Zilly, Frederik and Keinert, Joachim and Myszkowski, Karol and Rokita, Przemyslaw and Ritschel, Tobias}, LANGUAGE = {eng}, ISSN = {1467-8659}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2016}, PUBLREMARK = {Accepted}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {35}, NUMBER = {7}, BOOKTITLE = {The 24th Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2016)}, }
Endnote
%0 Journal Article %A Dąbała, Łukasz %A Ziegler, Matthias %A Didyk, Piotr %A Zilly, Frederik %A Keinert, Joachim %A Myszkowski, Karol %A Rokita, Przemyslaw %A Ritschel, Tobias %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Multi-Image Correspondences for Online Light : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82BA-5 %D 2016 %J Computer Graphics Forum %V 35 %N 7 %I Wiley-Blackwell %C Oxford, UK %@ false %B The 24th Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2016 PG 2016
Dai, A., Nießner, M., Zollhöfer, M., Izadi, S., and Theobalt, C. 2016. BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration. http://arxiv.org/abs/1604.01093.
(arXiv: 1604.01093)
Abstract
Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results, but suffer from: (1) needing minutes to perform online correction, preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking, and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle adjusted) poses in real-time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real-time to ensure global consistency; all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par with offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.
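As a small, self-contained illustration of the kind of geometry the pose optimization above relies on (this is not the BundleFusion implementation), the Python/NumPy sketch below performs the classic least-squares rigid alignment (Kabsch) of matched 3D feature points, the pairwise building block that a global, bundle-adjustment-style optimization over all frames would refine. All names and numbers are hypothetical.

```python
# Minimal sketch (not the authors' code): rigid alignment of corresponding
# 3D feature points, e.g. sparse features back-projected from two RGB-D frames.
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                   # rotation without reflection
    t = c_dst - R @ c_src
    return R, t

# Toy usage: recover a known pose from noisy correspondences.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(100, 3))
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.3])
obs = pts @ R_true.T + t_true + 0.001 * rng.normal(size=pts.shape)
R_est, t_est = rigid_align(pts, obs)
print(np.allclose(R_est, R_true, atol=1e-2), np.round(t_est, 3))
```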
Export
BibTeX
@online{DaiarXiv1604.01093, TITLE = {BundleFusion: {R}eal-time Globally Consistent {3D} Reconstruction using On-the-fly Surface Re-integration}, AUTHOR = {Dai, Angela and Nie{\ss}ner, Matthias and Zollh{\"o}fer, Michael and Izadi, Shahram and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1604.01093}, EPRINT = {1604.01093}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results, but suffer from: (1) needing minutes to perform online correction preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking, and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle adjusted) poses in real-time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real-time to ensure global consistency; all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par to offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.}, }
Endnote
%0 Report %A Dai, Angela %A Nießner, Matthias %A Zollhöfer, Michael %A Izadi, Shahram %A Theobalt, Christian %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A9F-2 %U http://arxiv.org/abs/1604.01093 %D 2016 %X Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results, but suffer from: (1) needing minutes to perform online correction preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking, and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle adjusted) poses in real-time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real-time to ensure global consistency; all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par to offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results. %K Computer Science, Graphics, cs.GR,Computer Science, Computer Vision and Pattern Recognition, cs.CV
DeVito, Z., Mara, M., Zollhöfer, M., et al. 2016. Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging. http://arxiv.org/abs/1604.06525.
(arXiv: 1604.06525)
Abstract
Many graphics and vision problems are naturally expressed as optimizations with either linear or non-linear least squares objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance in interactive applications. We propose a new language, Opt (available under http://optlang.org), in which a user simply writes energy functions over image- or graph-structured unknowns, and a compiler automatically generates state-of-the-art GPU optimization kernels. The end result is a system in which real-world energy functions in graphics and vision applications are expressible in tens of lines of code. They compile directly into highly-optimized GPU solver implementations with performance competitive with the best published hand-tuned, application-specific GPU solvers, and 1-2 orders of magnitude beyond a general-purpose auto-generated solver.
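As a rough, hedged illustration of the problem class the paper targets (this is not Opt syntax and not the authors' code), the snippet below states a tiny non-linear least-squares energy, a data term plus a smoothness term for 1D denoising, as a plain residual function and hands it to a generic CPU solver; Opt's point is that such energies, written declaratively, compile to specialized GPU kernels instead. The energy and all names are made up for illustration.

```python
# Generic least-squares energy solved on the CPU; Opt would generate a GPU solver
# from a declarative description of the same residuals.
import numpy as np
from scipy.optimize import least_squares

def residuals(x, y, lam):
    data = x - y                        # data term: stay close to observations y
    smooth = np.sqrt(lam) * np.diff(x)  # smoothness term: penalize neighbor differences
    return np.concatenate([data, smooth])

y = np.sin(np.linspace(0, 2 * np.pi, 200)) + 0.1 * np.random.default_rng(1).normal(size=200)
sol = least_squares(residuals, x0=y.copy(), args=(y, 10.0))
print(sol.cost)  # 0.5 * sum of squared residuals at the optimum
```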
Export
BibTeX
@online{escidoc:2351936, TITLE = {Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging}, AUTHOR = {DeVito, Zachary and Mara, Michael and Zollh{\"o}fer, Michael and Bernstein, Gilbert and Ragan-Kelley, Jonathan and Theobalt, Christian and Hanrahan, Pat and Fisher, Matthew and Nie{\ss}ner, Matthias}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1604.06525}, EPRINT = {1604.06525}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Many graphics and vision problems are naturally expressed as optimizations with either linear or non-linear least squares objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance in interactive applications. We propose a new language, Opt (available under http://optlang.org), in which a user simply writes energy functions over image- or graph-structured unknowns, and a compiler automatically generates state-of-the-art GPU optimization kernels. The end result is a system in which real-world energy functions in graphics and vision applications are expressible in tens of lines of code. They compile directly into highly-optimized GPU solver implementations with performance competitive with the best published hand-tuned, application-specific GPU solvers, and 1-2 orders of magnitude beyond a general-purpose auto-generated solver.}, }
Endnote
%0 Report %A DeVito, Zachary %A Mara, Michael %A Zollhöfer, Michael %A Bernstein, Gilbert %A Ragan-Kelley, Jonathan %A Theobalt, Christian %A Hanrahan, Pat %A Fisher, Matthew %A Nießner, Matthias %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9AA6-0 %U http://arxiv.org/abs/1604.06525 %D 2016 %X Many graphics and vision problems are naturally expressed as optimizations with either linear or non-linear least squares objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance in interactive applications. We propose a new language, Opt (available under http://optlang.org), in which a user simply writes energy functions over image- or graph-structured unknowns, and a compiler automatically generates state-of-the-art GPU optimization kernels. The end result is a system in which real-world energy functions in graphics and vision applications are expressible in tens of lines of code. They compile directly into highly-optimized GPU solver implementations with performance competitive with the best published hand-tuned, application-specific GPU solvers, and 1-2 orders of magnitude beyond a general-purpose auto-generated solver. %K Computer Science, Graphics, cs.GR,Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Programming Languages, cs.PL
Efrat, N., Didyk, P., Foshey, M., Matusik, W., and Levin, A. 2016. Cinema 3D: Large Scale Automultiscopic Display. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{EfratSIGGRAPH2016, TITLE = {Cinema {3D}: {L}arge Scale Automultiscopic Display}, AUTHOR = {Efrat, Netalee and Didyk, Piotr and Foshey, Mike and Matusik, Wojciech and Levin, Anat}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925921}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {59}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Efrat, Netalee %A Didyk, Piotr %A Foshey, Mike %A Matusik, Wojciech %A Levin, Anat %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Cinema 3D: Large Scale Automultiscopic Display : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0189-5 %R 10.1145/2897824.2925921 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 59 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Garrido, P., Zollhöfer, M., Casas, D., et al. 2016a. Reconstruction of Personalized 3D Face Rigs from Monocular Video. ACM Transactions on Graphics 35, 3.
Export
BibTeX
@article{GarridoTOG2016, TITLE = {Reconstruction of Personalized 3{D} Face Rigs from Monocular Video}, AUTHOR = {Garrido, Pablo and Zollh{\"o}fer, Michael and Casas, Dan and Valgaerts, Levi and Varanasi, Kiran and P{\'e}rez, Patrick and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2890493}, PUBLISHER = {Association for Computing Machinery}, ADDRESS = {New York, NY}, YEAR = {2016}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics}, VOLUME = {35}, NUMBER = {3}, EID = {28}, }
Endnote
%0 Journal Article %A Garrido, Pablo %A Zollhöfer, Michael %A Casas, Dan %A Valgaerts, Levi %A Varanasi, Kiran %A Pérez, Patrick %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Reconstruction of Personalized 3D Face Rigs from Monocular Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-F544-D %R 10.1145/2890493 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 3 %Z sequence number: 28 %I Association for Computing Machinery %C New York, NY %@ false
Garrido, P., Valgaerts, L., Rehmsen, O., Thormählen, T., Perez, P., and Theobalt, C. 2016b. Automatic Face Reenactment. http://arxiv.org/abs/1602.02651.
(arXiv: 1602.02651)
Abstract
We propose an image-based, facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance. Our system is fully automatic and does not require a database of source expressions. Instead, it is able to produce convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, where the user performs arbitrary facial gestures. Our reenactment pipeline is conceived as part image retrieval and part face transfer: The image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user's identity. Our system excels in simplicity as it does not rely on a 3D face model, it is robust under head motion and does not require the source and target performance to be similar. We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet.
Export
BibTeX
@online{GarridoarXiv1602.02651, TITLE = {Automatic Face Reenactment}, AUTHOR = {Garrido, Pablo and Valgaerts, Levi and Rehmsen, Ole and Thorm{\"a}hlen, Thorsten and Perez, Patrick and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.02651}, EPRINT = {1602.02651}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We propose an image-based, facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance. Our system is fully automatic and does not require a database of source expressions. Instead, it is able to produce convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, where the user performs arbitrary facial gestures. Our reenactment pipeline is conceived as part image retrieval and part face transfer: The image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user's identity. Our system excels in simplicity as it does not rely on a 3D face model, it is robust under head motion and does not require the source and target performance to be similar. We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet.}, }
Endnote
%0 Report %A Garrido, Pablo %A Valgaerts, Levi %A Rehmsen, Ole %A Thormählen, Thorsten %A Perez, Patrick %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Automatic Face Reenactment : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A53-8 %U http://arxiv.org/abs/1602.02651 %D 2016 %X We propose an image-based, facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance. Our system is fully automatic and does not require a database of source expressions. Instead, it is able to produce convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, where the user performs arbitrary facial gestures. Our reenactment pipeline is conceived as part image retrieval and part face transfer: The image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user's identity. Our system excels in simplicity as it does not rely on a 3D face model, it is robust under head motion and does not require the source and target performance to be similar. We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Graphics, cs.GR
Groeger, D., Chong Loo, E., and Steimle, J. 2016. HotFlex: Post-print Customization of 3D Prints Using Embedded State Change. CHI 2016, 34th Annual ACM Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{Groeger_chi2016, TITLE = {{HotFlex}: {P}ost-print Customization of {3D} Prints Using Embedded State Change}, AUTHOR = {Groeger, Daniel and Chong Loo, Elena and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISBN = {978-1-4503-3362-7}, DOI = {10.1145/2858036.2858191}, PUBLISHER = {ACM}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {CHI 2016, 34th Annual ACM Conference on Human Factors in Computing Systems}, PAGES = {420--432}, ADDRESS = {San Jose, CA, USA}, }
Endnote
%0 Conference Proceedings %A Groeger, Daniel %A Chong Loo, Elena %A Steimle, Jürgen %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T HotFlex: Post-print Customization of 3D Prints Using Embedded State Change : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-07BA-3 %R 10.1145/2858036.2858191 %D 2016 %B 34th Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2016-05-07 - 2016-05-12 %C San Jose, CA, USA %B CHI 2016 %P 420 - 432 %I ACM %@ 978-1-4503-3362-7
Gryaditskaya, Y., Masia, B., Didyk, P., Myszkowski, K., and Seidel, H.-P. Gloss Editing in Light Fields. VMV 2016 Vision, Modeling and Visualization, Eurographics Association.
(Accepted/in press)
Export
BibTeX
@inproceedings{jgryadit2016, TITLE = {Gloss Editing in Light Fields}, AUTHOR = {Gryaditskaya, Yulia and Masia, Belen and Didyk, Piotr and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, PUBLISHER = {Eurographics Association}, YEAR = {2016}, PUBLREMARK = {Accepted}, BOOKTITLE = {VMV 2016 Vision, Modeling and Visualization}, ADDRESS = {Bayreuth, Germany}, }
Endnote
%0 Conference Proceedings %A Gryaditskaya, Yulia %A Masia, Belen %A Didyk, Piotr %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Gloss Editing in Light Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82C5-B %D 2016 %B 21st International Symposium on Vision, Modeling and Visualization %Z date of event: 2016-10-10 - 2016-10-12 %C Bayreuth, Germany %B VMV 2016 Vision, Modeling and Visualization %I Eurographics Association
Havran, V., Filip, J., and Myszkowski, K. 2016. Perceptually Motivated BRDF Comparison using Single Image. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2016) 35, 4.
Export
BibTeX
@article{havran2016perceptually, TITLE = {Perceptually Motivated {BRDF} Comparison using Single Image}, AUTHOR = {Havran, Vlastimil and Filip, Jiri and Myszkowski, Karol}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12944}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {35}, NUMBER = {4}, PAGES = {1--12}, BOOKTITLE = {Eurographics Symposium on Rendering 2016}, EDITOR = {Eisemann, Elmar and Fiume, Eugene}, }
Endnote
%0 Journal Article %A Havran, Vlastimil %A Filip, Jiri %A Myszkowski, Karol %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptually Motivated BRDF Comparison using Single Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82C0-6 %R 10.1111/cgf.12944 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 4 %& 1 %P 1 - 12 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2016 %O Eurographics Symposium on Rendering 2016 EGSR 2016 Dublin, Ireland, 22-24 June 2016
Hoppe, S. and Bulling, A. 2016. End-to-End Eye Movement Detection Using Convolutional Neural Networks. http://arxiv.org/abs/1609.02452.
(arXiv: 1609.02452)
Abstract
Common computational methods for automated eye movement detection - i.e. the task of detecting different types of eye movement in a continuous stream of gaze data - are limited in that they either involve thresholding on hand-crafted signal features, require individual detectors each only detecting a single movement, or require pre-segmented data. We propose a novel approach for eye movement detection that only involves learning a single detector end-to-end, i.e. directly from the continuous gaze data stream and simultaneously for different eye movements without any manual feature crafting or segmentation. Our method is based on convolutional neural networks (CNN) that recently demonstrated superior performance in a variety of tasks in computer vision, signal processing, and machine learning. We further introduce a novel multi-participant dataset that contains scripted and free-viewing sequences of ground-truth annotated saccades, fixations, and smooth pursuits. We show that our CNN-based method outperforms state-of-the-art baselines by a large margin on this challenging dataset, thereby underlining the significant potential of this approach for holistic, robust, and accurate eye movement protocol analysis.
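To make the architecture idea concrete, here is a hypothetical minimal sketch (not the authors' network) of a 1D convolutional classifier that maps a short window of gaze samples to one of three eye-movement classes; the layer sizes, window length, and class set are assumptions for illustration only.

```python
# Hypothetical 1D CNN over a gaze window (x, y per time step) -> 3 classes
# (fixation, saccade, smooth pursuit).
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # pool over the time dimension
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, gaze):                # gaze: (batch, 2, window_length)
        h = self.features(gaze).squeeze(-1)
        return self.classifier(h)           # unnormalized class scores

model = GazeCNN()
window = torch.randn(8, 2, 30)              # batch of 8 short gaze windows
print(model(window).shape)                  # torch.Size([8, 3])
```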
Export
BibTeX
@online{Hoppe1609.02452, TITLE = {End-to-End Eye Movement Detection Using Convolutional Neural Networks}, AUTHOR = {Hoppe, Sabrina and Bulling, Andreas}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1609.02452}, EPRINT = {1609.02452}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Common computational methods for automated eye movement detection -- i.e. the task of detecting different types of eye movement in a continuous stream of gaze data -- are limited in that they either involve thresholding on hand-crafted signal features, require individual detectors each only detecting a single movement, or require pre-segmented data. We propose a novel approach for eye movement detection that only involves learning a single detector end-to-end, i.e. directly from the continuous gaze data stream and simultaneously for different eye movements without any manual feature crafting or segmentation. Our method is based on convolutional neural networks (CNN) that recently demonstrated superior performance in a variety of tasks in computer vision, signal processing, and machine learning. We further introduce a novel multi-participant dataset that contains scripted and free-viewing sequences of ground-truth annotated saccades, fixations, and smooth pursuits. We show that our CNN-based method outperforms state-of-the-art baselines by a large margin on this challenging dataset, thereby underlining the significant potential of this approach for holistic, robust, and accurate eye movement protocol analysis.}, }
Endnote
%0 Report %A Hoppe, Sabrina %A Bulling, Andreas %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T End-to-End Eye Movement Detection Using Convolutional Neural Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-AC6A-B %U http://arxiv.org/abs/1609.02452 %D 2016 %X Common computational methods for automated eye movement detection - i.e. the task of detecting different types of eye movement in a continuous stream of gaze data - are limited in that they either involve thresholding on hand-crafted signal features, require individual detectors each only detecting a single movement, or require pre-segmented data. We propose a novel approach for eye movement detection that only involves learning a single detector end-to-end, i.e. directly from the continuous gaze data stream and simultaneously for different eye movements without any manual feature crafting or segmentation. Our method is based on convolutional neural networks (CNN) that recently demonstrated superior performance in a variety of tasks in computer vision, signal processing, and machine learning. We further introduce a novel multi-participant dataset that contains scripted and free-viewing sequences of ground-truth annotated saccades, fixations, and smooth pursuits. We show that our CNN-based method outperforms state-of-the-art baselines by a large margin on this challenging dataset, thereby underlining the significant potential of this approach for holistic, robust, and accurate eye movement protocol analysis. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Innmann, M., Zollhöfer, M., Nießner, M., Theobalt, C., and Stamminger, M. 2016a. VolumeDeform: Real-time Volumetric Non-rigid Reconstruction. http://arxiv.org/abs/1603.08161.
(arXiv: 1603.08161)
Abstract
We present a novel approach for the reconstruction of dynamic geometric shapes using a single hand-held consumer-grade RGB-D sensor at real-time rates. Our method does not require a pre-defined shape template to start with and builds up the scene model from scratch during the scanning process. Geometry and motion are parameterized in a unified manner by a volumetric representation that encodes a distance field of the surface geometry as well as the non-rigid space deformation. Motion tracking is based on a set of extracted sparse color features in combination with a dense depth-based constraint formulation. This enables accurate tracking and drastically reduces drift inherent to standard model-to-depth alignment. We cast finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints. The problem is tackled in real-time at the camera's capture rate using a data-parallel flip-flop optimization strategy. Our results demonstrate robust tracking even for fast motion and scenes that lack geometric features.
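As a hedged, self-contained illustration of the volumetric representation mentioned above (not the VolumeDeform implementation), the snippet below fuses a few depth readings along a single camera ray into a truncated signed distance field, the kind of surface encoding that the paper augments with a non-rigid space deformation; grid resolution, truncation band, and depth values are made up.

```python
# Toy TSDF fusion along one camera ray.
import numpy as np

res, trunc = 64, 0.05
grid = np.linspace(0.0, 1.0, res)   # voxel centers along the ray (meters)
tsdf = np.zeros(res)
weight = np.zeros(res)

def integrate(depth):
    """Fold one depth measurement into the running TSDF average."""
    sdf = np.clip(depth - grid, -trunc, trunc) / trunc   # +1 in front of, -1 behind the surface
    mask = (depth - grid) > -trunc                       # skip voxels far behind the surface
    tsdf[mask] = (tsdf[mask] * weight[mask] + sdf[mask]) / (weight[mask] + 1.0)
    weight[mask] += 1.0

for d in (0.52, 0.50, 0.51):                             # three noisy readings of one surface
    integrate(d)

valid = weight > 0
surface = grid[np.argmin(np.where(valid, np.abs(tsdf), np.inf))]
print(round(float(surface), 3))                          # near the averaged depth of ~0.51
```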
Export
BibTeX
@online{InnmannarXiv1603.08161, TITLE = {{VolumeDeform}: Real-time Volumetric Non-rigid Reconstruction}, AUTHOR = {Innmann, Matthias and Zollh{\"o}fer, Michael and Nie{\ss}ner, Matthias and Theobalt, Christian and Stamminger, Marc}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1603.08161}, EPRINT = {1603.08161}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We present a novel approach for the reconstruction of dynamic geometric shapes using a single hand-held consumer-grade RGB-D sensor at real-time rates. Our method does not require a pre-defined shape template to start with and builds up the scene model from scratch during the scanning process. Geometry and motion are parameterized in a unified manner by a volumetric representation that encodes a distance field of the surface geometry as well as the non-rigid space deformation. Motion tracking is based on a set of extracted sparse color features in combination with a dense depth-based constraint formulation. This enables accurate tracking and drastically reduces drift inherent to standard model-to-depth alignment. We cast finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints. The problem is tackled in real-time at the camera's capture rate using a data-parallel flip-flop optimization strategy. Our results demonstrate robust tracking even for fast motion and scenes that lack geometric features.}, }
Endnote
%0 Report %A Innmann, Matthias %A Zollhöfer, Michael %A Nießner, Matthias %A Theobalt, Christian %A Stamminger, Marc %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T VolumeDeform: Real-time Volumetric Non-rigid Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A8E-6 %U http://arxiv.org/abs/1603.08161 %D 2016 %X We present a novel approach for the reconstruction of dynamic geometric shapes using a single hand-held consumer-grade RGB-D sensor at real-time rates. Our method does not require a pre-defined shape template to start with and builds up the scene model from scratch during the scanning process. Geometry and motion are parameterized in a unified manner by a volumetric representation that encodes a distance field of the surface geometry as well as the non-rigid space deformation. Motion tracking is based on a set of extracted sparse color features in combination with a dense depth-based constraint formulation. This enables accurate tracking and drastically reduces drift inherent to standard model-to-depth alignment. We cast finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints. The problem is tackled in real-time at the camera's capture rate using a data-parallel flip-flop optimization strategy. Our results demonstrate robust tracking even for fast motion and scenes that lack geometric features. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Innmann, M., Zollhöfer, M., Nießner, M., Theobalt, C., and Stamminger, M. 2016b. VolumeDeform: Real-Time Volumetric Non-rigid Reconstruction. Computer Vision – ECCV 2016, Springer.
Export
BibTeX
@inproceedings{InnmannECCV2016, TITLE = {{VolumeDeform}: {R}eal-Time Volumetric Non-rigid Reconstruction}, AUTHOR = {Innmann, Matthias and Zollh{\"o}fer, Michael and Nie{\ss}ner, Matthias and Theobalt, Christian and Stamminger, Marc}, LANGUAGE = {eng}, ISBN = {978-3-319-46483-1}, DOI = {10.1007/978-3-319-46484-8_22}, PUBLISHER = {Springer}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {Computer Vision -- ECCV 2016}, EDITOR = {Leibe, Bastian and Matas, Jiri and Sebe, Nicu and Welling, Max}, PAGES = {362--379}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9912}, ADDRESS = {Amsterdam, The Netherlands}, }
Endnote
%0 Conference Proceedings %A Innmann, Matthias %A Zollhöfer, Michael %A Nießner, Matthias %A Theobalt, Christian %A Stamminger, Marc %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T VolumeDeform: Real-Time Volumetric Non-rigid Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A41-0 %R 10.1007/978-3-319-46484-8_22 %D 2016 %B 14th European Conference on Computer Vision %Z date of event: 2016-10-11 - 2016-10-14 %C Amsterdam, The Netherlands %B Computer Vision -- ECCV 2016 %E Leibe, Bastian; Matas, Jiri; Sebe, Nicu; Welling, Max %P 362 - 379 %I Springer %@ 978-3-319-46483-1 %B Lecture Notes in Computer Science %N 9912
Kellnhofer, P., Didyk, P., Myszkowski, K., Hefeeda, M.M., Seidel, H.-P., and Matusik, W. 2016a. GazeStereo3D: Seamless Disparity Manipulations. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{KellnhoferSIGGRAPH2016, TITLE = {{GazeStereo3D}: {S}eamless Disparity Manipulations}, AUTHOR = {Kellnhofer, Petr and Didyk, Piotr and Myszkowski, Karol and Hefeeda, Mohamed M. and Seidel, Hans-Peter and Matusik, Wojciech}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925866}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {68}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Didyk, Piotr %A Myszkowski, Karol %A Hefeeda, Mohamed M. %A Seidel, Hans-Peter %A Matusik, Wojciech %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T GazeStereo3D: Seamless Disparity Manipulations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0190-4 %R 10.1145/2897824.2925866 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 68 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Kellnhofer, P., Didyk, P., Ritschel, T., Masia, B., Myszkowski, K., and Seidel, H.-P. Motion Parallax in Stereo 3D: Model and Applications. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
(Accepted/in press)
Export
BibTeX
@article{Kellnhofer2016SGA, TITLE = {Motion Parallax in Stereo {3D}: {M}odel and Applications}, AUTHOR = {Kellnhofer, Petr and Didyk, Piotr and Ritschel, Tobias and Masia, Belen and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, PUBLREMARK = {Accepted}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Didyk, Piotr %A Ritschel, Tobias %A Masia, Belen %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Motion Parallax in Stereo 3D: Model and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82B6-D %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Kellnhofer, P. 2016. Perceptual modeling for stereoscopic 3D. PhD thesis, Universität des Saarlandes, Saarbrücken. urn:nbn:de:bsz:291-scidok-66813.
Abstract
Virtual and Augmented Reality applications typically both rely on stereoscopic presentation and involve intensive object and observer motion. The combination of high dynamic range and stereoscopic capabilities is becoming popular in consumer displays and is a desirable functionality of head-mounted displays to come. This thesis focuses on the complex interactions between all these visual cues on digital displays. The first part investigates the challenges of combining stereoscopic 3D with motion. We consider the interaction between continuous motion and its presentation as discrete frames. Then, we discuss disparity processing for accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and of eye fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond current display capabilities by considering the full perceivable luminance range, and we simulate the real-world experience in such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and of reflective/refractive surface rendering. The core of our research methodology is perceptual modeling, supported by our own experimental studies, to overcome limitations of current display technologies and improve the viewer experience by enhancing perceived depth, reducing visual artifacts, or improving viewing comfort.
Export
BibTeX
@phdthesis{Kellnhoferphd2016, TITLE = {Perceptual modeling for stereoscopic {3D}}, AUTHOR = {Kellnhofer, Petr}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66813}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, ABSTRACT = {Virtual and Augmented Reality applications typically rely on both stereoscopic presentation and involve intensive object and observer motion. A combination of high dynamic range and stereoscopic capabilities become popular for consumer displays, and is a desirable functionality of head mounted displays to come. The thesis is focused on complex interactions between all these visual cues on digital displays. The first part investigates challenges of the stereoscopic 3D and motion combination. We consider an interaction between the continuous motion presented as discrete frames. Then, we discuss a disparity processing for accurate reproduction of objects moving in the depth direction. Finally, we investigate the depth perception as a function of motion parallax and eye fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond the current display capabilities by considering the full perceivable luminance range and we simulate the real world experience in such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and reflective/refractive surface rendering. The core of our research methodology is perceptual modeling supported by our own experimental studies to overcome limitations of current display technologies and improve the viewer experience by enhancing perceived depth, reducing visual artifacts or improving viewing comfort.}, }
Endnote
%0 Thesis %A Kellnhofer, Petr %Y Myszkowski, Karol %A referee: Seidel, Hans-Peter %A referee: Masia, Belen %A referee: Matusik, Wojciech %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Perceptual modeling for stereoscopic 3D : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-BBA6-1 %U urn:nbn:de:bsz:291-scidok-66813 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xxiv, 158 p. %V phd %9 phd %X Virtual and Augmented Reality applications typically rely on both stereoscopic presentation and involve intensive object and observer motion. A combination of high dynamic range and stereoscopic capabilities become popular for consumer displays, and is a desirable functionality of head mounted displays to come. The thesis is focused on complex interactions between all these visual cues on digital displays. The first part investigates challenges of the stereoscopic 3D and motion combination. We consider an interaction between the continuous motion presented as discrete frames. Then, we discuss a disparity processing for accurate reproduction of objects moving in the depth direction. Finally, we investigate the depth perception as a function of motion parallax and eye fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond the current display capabilities by considering the full perceivable luminance range and we simulate the real world experience in such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and reflective/refractive surface rendering. The core of our research methodology is perceptual modeling supported by our own experimental studies to overcome limitations of current display technologies and improve the viewer experience by enhancing perceived depth, reducing visual artifacts or improving viewing comfort. %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6681/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2016b. Transformation-aware Perceptual Image Metric. Journal of Electronic Imaging 25, 5.
Export
BibTeX
@article{Kellnhofer2016jei, TITLE = {Transformation-aware Perceptual Image Metric}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {1017-9909}, DOI = {10.1117/1.JEI.25.5.053014}, PUBLISHER = {SPIE}, ADDRESS = {Bellingham, WA}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Journal of Electronic Imaging}, VOLUME = {25}, NUMBER = {5}, EID = {053014}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Transformation-aware Perceptual Image Metric : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82B3-4 %R 10.1117/1.JEI.25.5.053014 %7 2016 %D 2016 %J Journal of Electronic Imaging %V 25 %N 5 %Z sequence number: 053014 %I SPIE %C Bellingham, WA %@ false
Kim, H., Richardt, C., and Theobalt, C. 2016a. Video Depth-From-Defocus. http://arxiv.org/abs/1610.03782.
(arXiv: 1610.03782)
Abstract
Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
Export
BibTeX
@online{Kim1610.03782, TITLE = {Video Depth-From-Defocus}, AUTHOR = {Kim, Hyeongwoo and Richardt, Christian and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1610.03782}, EPRINT = {1610.03782}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.}, }
Endnote
%0 Report %A Kim, Hyeongwoo %A Richardt, Christian %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute University of Bath Computer Graphics, MPI for Informatics, Max Planck Society %T Video Depth-From-Defocus : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-B02D-7 %U http://arxiv.org/abs/1610.03782 %D 2016 %X Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
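The depth-from-defocus idea summarized above relies on the standard thin-lens relation between defocus blur and depth: a point at distance S2, imaged by a lens of focal length f focused at distance S1, produces a circle of confusion whose diameter grows with the focus error. The short sketch below evaluates only that textbook relation; it is not the authors' video-based estimation algorithm, and the lens parameters in the example are made up.

    def circle_of_confusion(f_mm, f_number, focus_dist_mm, depth_mm):
        """Thin-lens circle-of-confusion diameter (in mm on the sensor).

        Standard optics relation c = A * |S2 - S1| / S2 * f / (S1 - f),
        with aperture diameter A = f / N. Illustrative only; not the
        paper's depth-from-defocus estimator."""
        aperture = f_mm / f_number                  # A = f / N
        focus_term = f_mm / (focus_dist_mm - f_mm)  # f / (S1 - f)
        return aperture * abs(depth_mm - focus_dist_mm) / depth_mm * focus_term

    # Example: a 50 mm f/1.8 lens focused at 2 m blurs a point at 4 m
    # to a disc of roughly 0.36 mm diameter on the sensor.
    print(circle_of_confusion(50.0, 1.8, 2000.0, 4000.0))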
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2016b. Context-guided Diffusion for Label Propagation on Graphs. http://arxiv.org/abs/1602.06439.
(arXiv: 1602.06439)
Abstract
Existing approaches for diffusion on graphs, e.g., for label propagation, are mainly focused on isotropic diffusion, which is induced by the commonly-used graph Laplacian regularizer. Inspired by the success of diffusivity tensors for anisotropic diffusion in image processing, we present anisotropic diffusion on graphs and the corresponding label propagation algorithm. We develop positive definite diffusivity operators on the vector bundles of Riemannian manifolds, and discretize them to diffusivity operators on graphs. This enables us to easily define new robust diffusivity operators which significantly improve semi-supervised learning performance over existing diffusion algorithms.
Export
BibTeX
@online{KimarXiv1602.06439, TITLE = {Context-guided Diffusion for Label Propagation on Graphs}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.06439}, EPRINT = {1602.06439}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Existing approaches for diffusion on graphs, e.g., for label propagation, are mainly focused on isotropic diffusion, which is induced by the commonly-used graph Laplacian regularizer. Inspired by the success of diffusivity tensors for anisotropic diffusion in image processing, we presents anisotropic diffusion on graphs and the corresponding label propagation algorithm. We develop positive definite diffusivity operators on the vector bundles of Riemannian manifolds, and discretize them to diffusivity operators on graphs. This enables us to easily define new robust diffusivity operators which significantly improve semi-supervised learning performance over existing diffusion algorithms.}, }
Endnote
%0 Report %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Context-guided Diffusion for Label Propagation on Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A84-9 %U http://arxiv.org/abs/1602.06439 %D 2016 %X Existing approaches for diffusion on graphs, e.g., for label propagation, are mainly focused on isotropic diffusion, which is induced by the commonly-used graph Laplacian regularizer. Inspired by the success of diffusivity tensors for anisotropic diffusion in image processing, we presents anisotropic diffusion on graphs and the corresponding label propagation algorithm. We develop positive definite diffusivity operators on the vector bundles of Riemannian manifolds, and discretize them to diffusivity operators on graphs. This enables us to easily define new robust diffusivity operators which significantly improve semi-supervised learning performance over existing diffusion algorithms. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
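For context, the isotropic baseline that the abstract above contrasts against is label propagation with the graph Laplacian regularizer, which admits a standard closed-form solution. The sketch below implements only that baseline (variable names are mine); the anisotropic diffusivity operators proposed in the paper are not reproduced here.

    import numpy as np

    def propagate_labels(W, Y, reg=1.0):
        """Isotropic label propagation: argmin_F ||F - Y||^2 + reg * tr(F^T L F)
        with the unnormalized graph Laplacian L = D - W. Rows of Y are one-hot
        for labeled nodes and zero for unlabeled ones."""
        L = np.diag(W.sum(axis=1)) - W
        return np.linalg.solve(np.eye(W.shape[0]) + reg * L, Y)

    # Toy chain graph with the two end nodes labeled differently.
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    Y = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], dtype=float)
    print(propagate_labels(W, Y).argmax(axis=1))  # labels diffuse toward the middle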
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2016c. Semi-supervised Learning with Explicit Relationship Regularization. http://arxiv.org/abs/1602.03808.
(arXiv: 1602.03808)
Abstract
In many learning tasks, the structure of the target space of a function holds rich information about the relationships between evaluations of functions on different data points. Existing approaches attempt to exploit this relationship information implicitly by enforcing smoothness on function evaluations only. However, what happens if we explicitly regularize the relationships between function evaluations? Inspired by homophily, we regularize based on a smooth relationship function, either defined from the data or with labels. In experiments, we demonstrate that this significantly improves the performance of state-of-the-art algorithms in semi-supervised classification and in spectral data embedding for constrained clustering and dimensionality reduction.
Export
BibTeX
@online{KimarXiv1602.03808, TITLE = {Semi-supervised Learning with Explicit Relationship Regularization}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.03808}, EPRINT = {1602.03808}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {In many learning tasks, the structure of the target space of a function holds rich information about the relationships between evaluations of functions on different data points. Existing approaches attempt to exploit this relationship information implicitly by enforcing smoothness on function evaluations only. However, what happens if we explicitly regularize the relationships between function evaluations? Inspired by homophily, we regularize based on a smooth relationship function, either defined from the data or with labels. In experiments, we demonstrate that this significantly improves the performance of state-of-the-art algorithms in semi-supervised classification and in spectral data embedding for constrained clustering and dimensionality reduction.}, }
Endnote
%0 Report %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Semi-supervised Learning with Explicit Relationship Regularization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A62-6 %U http://arxiv.org/abs/1602.03808 %D 2016 %X In many learning tasks, the structure of the target space of a function holds rich information about the relationships between evaluations of functions on different data points. Existing approaches attempt to exploit this relationship information implicitly by enforcing smoothness on function evaluations only. However, what happens if we explicitly regularize the relationships between function evaluations? Inspired by homophily, we regularize based on a smooth relationship function, either defined from the data or with labels. In experiments, we demonstrate that this significantly improves the performance of state-of-the-art algorithms in semi-supervised classification and in spectral data embedding for constrained clustering and dimensionality reduction. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Learning, cs.LG
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2016d. Local High-order Regularization on Data Manifolds. http://arxiv.org/abs/1602.03805.
(arXiv: 1602.03805)
Abstract
The common graph Laplacian regularizer is well-established in semi-supervised learning and spectral dimensionality reduction. However, as a first-order regularizer, it can lead to degenerate functions in high-dimensional manifolds. The iterated graph Laplacian enables high-order regularization, but it has a high computational complexity and so cannot be applied to large problems. We introduce a new regularizer which is globally high order and so does not suffer from the degeneracy of the graph Laplacian regularizer, but is also sparse for efficient computation in semi-supervised learning applications. We reduce computational complexity by building a local first-order approximation of the manifold as a surrogate geometry, and construct our high-order regularizer based on local derivative evaluations therein. Experiments on human body shape and pose analysis demonstrate the effectiveness and efficiency of our method.
Export
BibTeX
@online{KimarXiv1602.03805, TITLE = {Local High-order Regularization on Data Manifolds}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.03805}, EPRINT = {1602.03805}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {The common graph Laplacian regularizer is well-established in semi-supervised learning and spectral dimensionality reduction. However, as a first-order regularizer, it can lead to degenerate functions in high-dimensional manifolds. The iterated graph Laplacian enables high-order regularization, but it has a high computational complexity and so cannot be applied to large problems. We introduce a new regularizer which is globally high order and so does not suffer from the degeneracy of the graph Laplacian regularizer, but is also sparse for efficient computation in semi-supervised learning applications. We reduce computational complexity by building a local first-order approximation of the manifold as a surrogate geometry, and construct our high-order regularizer based on local derivative evaluations therein. Experiments on human body shape and pose analysis demonstrate the effectiveness and efficiency of our method.}, }
Endnote
%0 Report %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Local High-order Regularization on Data Manifolds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A5F-0 %U http://arxiv.org/abs/1602.03805 %D 2016 %X The common graph Laplacian regularizer is well-established in semi-supervised learning and spectral dimensionality reduction. However, as a first-order regularizer, it can lead to degenerate functions in high-dimensional manifolds. The iterated graph Laplacian enables high-order regularization, but it has a high computational complexity and so cannot be applied to large problems. We introduce a new regularizer which is globally high order and so does not suffer from the degeneracy of the graph Laplacian regularizer, but is also sparse for efficient computation in semi-supervised learning applications. We reduce computational complexity by building a local first-order approximation of the manifold as a surrogate geometry, and construct our high-order regularizer based on local derivative evaluations therein. Experiments on human body shape and pose analysis demonstrate the effectiveness and efficiency of our method. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Krafka, K., Khosla, A., Kellnhofer, P., et al. Eye Tracking for Everyone. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), IEEE.
(Accepted/in press)
Export
BibTeX
@inproceedings{KrafkaCVPR2016, TITLE = {Eye Tracking for Everyone}, AUTHOR = {Krafka, Kyle and Khosla, Aditya and Kellnhofer, Petr and Kannan, Harini and Bhandarkar, Suchendra and Matusik, Wojciech and Torralba, Antonio}, LANGUAGE = {eng}, PUBLISHER = {IEEE}, YEAR = {2016}, PUBLREMARK = {Accepted}, BOOKTITLE = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016)}, PAGES = {2176--2184}, ADDRESS = {Las Vegas, NV, USA}, }
Endnote
%0 Conference Proceedings %A Krafka, Kyle %A Khosla, Aditya %A Kellnhofer, Petr %A Kannan, Harini %A Bhandarkar, Suchendra %A Matusik, Wojciech %A Torralba, Antonio %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T Eye Tracking for Everyone : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-8245-D %D 2016 %B IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2016-06-26 - 2016-07-01 %C Las Vegas, NV, USA %B IEEE Conference on Computer Vision and Pattern Recognition %P 2176 - 2184 %I IEEE %U http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Krafka_Eye_Tracking_for_CVPR_2016_paper.pdf
Lavoué, G., Liu, H., Myszkowski, K., and Lin, W. 2016. Quality Assessment and Perception in Computer Graphics. IEEE Computer Graphics and Applications 36, 4.
Export
BibTeX
@article{Lavoue2016, TITLE = {Quality Assessment and Perception in Computer Graphics}, AUTHOR = {Lavou{\'e}, Guillaume and Liu, Hantao and Myszkowski, Karol and Lin, Weisi}, LANGUAGE = {eng}, ISSN = {0272-1716}, DOI = {10.1109/MCG.2016.72}, PUBLISHER = {IEEE Computer Society :}, ADDRESS = {Los Alamitos, CA}, YEAR = {2016}, DATE = {2016}, JOURNAL = {IEEE Computer Graphics and Applications}, VOLUME = {36}, NUMBER = {4}, PAGES = {21--22}, }
Endnote
%0 Journal Article %A Lavoué, Guillaume %A Liu, Hantao %A Myszkowski, Karol %A Lin, Weisi %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Quality Assessment and Perception in Computer Graphics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-8411-2 %R 10.1109/MCG.2016.72 %7 2016-07-29 %D 2016 %J IEEE Computer Graphics and Applications %V 36 %N 4 %& 21 %P 21 - 22 %I IEEE Computer Society : %C Los Alamitos, CA %@ false
Leimkühler, T., Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2016. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion. Graphics Interface 2016, 42nd Graphics Interface Conference, Canadian Information Processing Society.
Export
BibTeX
@inproceedings{LeimkuehlerGI2016, TITLE = {Perceptual real-time {2D}-to-{3D} conversion using cue fusion}, AUTHOR = {Leimk{\"u}hler, Thomas and Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-0-9947868-1-4}, DOI = {10.20380/GI2016.02}, PUBLISHER = {Canadian Information Processing Society}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {Graphics Interface 2016, 42nd Graphics Interface Conference}, EDITOR = {Popa, Tiberiu and Moffatt, Karyn}, PAGES = {5--12}, ADDRESS = {Victoria, BC, Canada}, }
Endnote
%0 Conference Proceedings %A Leimkühler, Thomas %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-823D-1 %R 10.20380/GI2016.02 %D 2016 %B 42nd Graphics Interface Conference %Z date of event: 2016-06-01 - 2016-06-03 %C Victoria, BC, Canada %B Graphics Interface 2016 %E Popa, Tiberiu; Moffatt, Karyn %P 5 - 12 %I Canadian Information Processing Society %@ 978-0-9947868-1-4
Levinkov, E., Tompkin, J., Bonneel, N., Kirchhoff, S., Andres, B., and Pfister, H. Interactive Multi-Label Video Segmentation. Computer Graphics Forum (Proc. Pacific Graphics 2016) 35, 7.
(Accepted/in press)
Export
BibTeX
@article{LevinkovPG2016, TITLE = {Interactive Multi-Label Video Segmentation}, AUTHOR = {Levinkov, Evgeny and Tompkin, James and Bonneel, N. and Kirchhoff, S. and Andres, Bj{\"o}rn and Pfister, H.}, LANGUAGE = {eng}, ISSN = {1467-8659}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2016}, PUBLREMARK = {Accepted}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {35}, NUMBER = {7}, BOOKTITLE = {The 24th Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2016)}, }
Endnote
%0 Journal Article %A Levinkov, Evgeny %A Tompkin, James %A Bonneel, N. %A Kirchhoff, S. %A Andres, Björn %A Pfister, H. %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Interactive Multi-Label Video Segmentation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-AAF5-0 %D 2016 %J Computer Graphics Forum %V 35 %N 7 %I Wiley-Blackwell %C Oxford, UK %@ false %B The 24th Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2016 PG 2016
Meka, A., Zollhöfer, M., Richardt, C., and Theobalt, C. 2016. Live Intrinsic Video. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{MekaSIGGRAPH2016, TITLE = {Live Intrinsic Video}, AUTHOR = {Meka, Abhimitra and Zollh{\"o}fer, Michael and Richardt, Christian and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925907}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {109}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Meka, Abhimitra %A Zollhöfer, Michael %A Richardt, Christian %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Live Intrinsic Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-07C8-3 %R 10.1145/2897824.2925907 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 109 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Nalbach, O., Arabadzhiyska, E., Mehta, D., Seidel, H.-P., and Ritschel, T. 2016. Deep Shading: Convolutional Neural Networks for Screen-Space Shading. http://arxiv.org/abs/1603.06078.
(arXiv: 1603.06078)
Abstract
In computer vision, Convolutional Neural Networks (CNNs) have recently achieved new levels of performance for several inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen-space shading has recently increased the visual quality in interactive image synthesis, where per-pixel attributes such as positions, normals or reflectance of a virtual 3D scene are converted into RGB pixel appearance, enabling effects like ambient occlusion, indirect light, scattering, depth-of-field, motion blur, or anti-aliasing. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading simulates all screen-space effects as well as arbitrary combinations thereof at competitive quality and speed while not being programmed by human experts but learned from example images.
Export
BibTeX
@online{NalbacharXiv2016, TITLE = {Deep Shading: Convolutional Neural Networks for Screen-Space Shading}, AUTHOR = {Nalbach, Oliver and Arabadzhiyska, Elena and Mehta, Dushyant and Seidel, Hans-Peter and Ritschel, Tobias}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1603.06078}, EPRINT = {1603.06078}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {In computer vision, Convolutional Neural Networks (CNNs) have recently achieved new levels of performance for several inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen-space shading has recently increased the visual quality in interactive image synthesis, where per-pixel attributes such as positions, normals or reflectance of a virtual 3D scene are converted into RGB pixel appearance, enabling effects like ambient occlusion, indirect light, scattering, depth-of-field, motion blur, or anti-aliasing. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading simulates all screen-space effects as well as arbitrary combinations thereof at competitive quality and speed while not being programmed by human experts but learned from example images.}, }
Endnote
%0 Report %A Nalbach, Oliver %A Arabadzhiyska, Elena %A Mehta, Dushyant %A Seidel, Hans-Peter %A Ritschel, Tobias %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Deep Shading: Convolutional Neural Networks for Screen-Space Shading : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0174-4 %U http://arxiv.org/abs/1603.06078 %D 2016 %X In computer vision, Convolutional Neural Networks (CNNs) have recently achieved new levels of performance for several inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen-space shading has recently increased the visual quality in interactive image synthesis, where per-pixel attributes such as positions, normals or reflectance of a virtual 3D scene are converted into RGB pixel appearance, enabling effects like ambient occlusion, indirect light, scattering, depth-of-field, motion blur, or anti-aliasing. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading simulates all screen-space effects as well as arbitrary combinations thereof at competitive quality and speed while not being programmed by human experts but learned from example images. %K Computer Science, Graphics, cs.GR,Computer Science, Learning, cs.LG
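As a rough illustration of the setting described above (per-pixel attributes in, RGB appearance out), the PyTorch sketch below maps a stack of screen-space attribute channels to an RGB image with a few convolutions. It is a toy stand-in with invented channel counts and sizes, not the paper's Deep Shading network, which is a substantially deeper convolutional architecture trained on rendered ground-truth images.

    import torch
    import torch.nn as nn

    class TinyScreenSpaceShader(nn.Module):
        """Map per-pixel deferred-shading attributes (normals, positions, albedo,
        stacked into in_ch channels) to RGB. Toy network for illustration only."""
        def __init__(self, in_ch=9, width=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, 3, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, gbuffer):
            return self.net(gbuffer)

    # One gradient computation on random stand-in data (batch of 2, 9 channels, 64x64).
    model = TinyScreenSpaceShader()
    gbuffer = torch.rand(2, 9, 64, 64)
    target = torch.rand(2, 3, 64, 64)
    loss = nn.functional.mse_loss(model(gbuffer), target)
    loss.backward()  # an optimizer step would follow in real training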
Nittala, A.S. and Steimle, J. 2016. Digital Fabrication Pipeline for On-Body Sensors: Design Goals and Challenges. UbiComp’16 Adjunct, ACM.
Export
BibTeX
@inproceedings{NittalaUbiComp2016, TITLE = {Digital Fabrication Pipeline for On-Body Sensors: {D}esign Goals and Challenges}, AUTHOR = {Nittala, Aditya Shekhar and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISBN = {978-1-4503-4462-3}, DOI = {10.1145/2968219.2979140}, PUBLISHER = {ACM}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {UbiComp'16 Adjunct}, PAGES = {950--953}, ADDRESS = {Heidelberg, Germany}, }
Endnote
%0 Conference Proceedings %A Nittala, Aditya Shekhar %A Steimle, Jürgen %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Digital Fabrication Pipeline for On-Body Sensors: Design Goals and Challenges : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-989E-1 %R 10.1145/2968219.2979140 %D 2016 %B ACM International Joint Conference on Pervasive and Ubiquitous Computing %Z date of event: 2016-09-12 - 2016-09-16 %C Heidelberg, Germany %B UbiComp'16 Adjunct %P 950 - 953 %I ACM %@ 978-1-4503-4462-3
Piovarči, M., Levin, D.I.W., Rebello, J., et al. 2016. An Interaction-Aware, Perceptual Model for Non-Linear Elastic Objects. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{PiovarciSIGGRAPH2016, TITLE = {An Interaction-Aware, Perceptual Model for Non-Linear Elastic Objects}, AUTHOR = {Piovar{\v c}i, Michal and Levin, David I. W. and Rebello, Jason and Chen, Desai and {\v D}urikovi{\v c}, Roman and Pfister, Hanspeter and Matusik, Wojciech and Didyk, Piotr}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925885}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {55}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Piovarči, Michal %A Levin, David I. W. %A Rebello, Jason %A Chen, Desai %A Ďurikovič, Roman %A Pfister, Hanspeter %A Matusik, Wojciech %A Didyk, Piotr %+ External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T An Interaction-Aware, Perceptual Model for Non-Linear Elastic Objects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0187-9 %R 10.1145/2897824.2925885 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 55 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Pishchulin, L. 2016. Articulated People Detection and Pose Estimation in Challenging Real World Environments. urn:nbn:de:bsz:291-scidok-65478.
Export
BibTeX
@phdthesis{PishchulinPhD2016, TITLE = {Articulated People Detection and Pose Estimation in Challenging Real World Environments}, AUTHOR = {Pishchulin, Leonid}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65478}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
Endnote
%0 Thesis %A Pishchulin, Leonid %Y Schiele, Bernt %A referee: Theobalt, Christian %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Articulated People Detection and Pose Estimation in Challenging Real World Environments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-F000-B %U urn:nbn:de:bsz:291-scidok-65478 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XIII, 248 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6547/
Reinert, B., Ritschel, T., Seidel, H.-P., and Georgiev, I. 2016. Projective Blue-Noise Sampling. Computer Graphics Forum 35, 1.
Export
BibTeX
@article{ReinertCGF2016, TITLE = {Projective Blue-Noise Sampling}, AUTHOR = {Reinert, Bernhard and Ritschel, Tobias and Seidel, Hans-Peter and Georgiev, Iliyan}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12725}, PUBLISHER = {Wiley}, ADDRESS = {Chichester}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Computer Graphics Forum}, VOLUME = {35}, NUMBER = {1}, PAGES = {285--295}, }
Endnote
%0 Journal Article %A Reinert, Bernhard %A Ritschel, Tobias %A Seidel, Hans-Peter %A Georgiev, Iliyan %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Projective Blue-Noise Sampling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-1A31-D %R 10.1111/cgf.12725 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 1 %& 285 %P 285 - 295 %I Wiley %C Chichester %@ false
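The projective blue-noise idea above asks for point sets that are well distributed not only in the full domain but also in their lower-dimensional projections. The naive rejection sampler below illustrates that criterion in 2D by enforcing a minimum distance both in the plane and along each axis projection; the radii and counts are arbitrary, and the paper's actual sampling algorithms are considerably more refined.

    import random

    def projective_dart_throwing(n, r2d=0.05, r1d=0.01, max_tries=100000, seed=0):
        """Accept a candidate point only if it keeps a minimum distance to all
        accepted points in 2D and in both 1D axis projections."""
        rng = random.Random(seed)
        pts = []
        tries = 0
        while len(pts) < n and tries < max_tries:
            tries += 1
            x, y = rng.random(), rng.random()
            if all((x - px) ** 2 + (y - py) ** 2 >= r2d ** 2  # 2D separation
                   and abs(x - px) >= r1d                     # x-projection
                   and abs(y - py) >= r1d                     # y-projection
                   for px, py in pts):
                pts.append((x, y))
        return pts

    samples = projective_dart_throwing(50)
    print(len(samples), "samples accepted")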
Rematas, K., Nguyen, C., Ritschel, T., Fritz, M., and Tuytelaars, T. 2016. Novel Views of Objects from a Single Image. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Export
BibTeX
@article{rematas16tpami, TITLE = {Novel Views of Objects from a Single Image}, AUTHOR = {Rematas, Konstantinos and Nguyen, Chuong and Ritschel, Tobias and Fritz, Mario and Tuytelaars, Tinne}, LANGUAGE = {eng}, ISSN = {0162-8828}, DOI = {10.1109/TPAMI.2016.2601093}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2016}, JOURNAL = {IEEE Transactions on Pattern Analysis and Machine Intelligence}, }
Endnote
%0 Journal Article %A Rematas, Konstantinos %A Nguyen, Chuong %A Ritschel, Tobias %A Fritz, Mario %A Tuytelaars, Tinne %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Novel Views of Objects from a Single Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-058A-1 %R 10.1109/TPAMI.2016.2601093 %7 2016 %D 2016 %K Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Graphics, cs.GR %J IEEE Transactions on Pattern Analysis and Machine Intelligence %O IEEE Trans. Pattern Anal. Mach. Intell. %I IEEE Computer Society %C Los Alamitos, CA %@ false
Rhodin, H., Richardt, C., Casas, D., et al. EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
(Accepted/in press)
Export
BibTeX
@article{Rhodin2016SGA, TITLE = {{EgoCap}: {E}gocentric Marker-less Motion Capture with Two Fisheye Cameras}, AUTHOR = {Rhodin, Helge and Richardt, Christian and Casas, Dan and Insafutdinov, Eldar and Shafiei, Mohammad and Seidel, Hans-Peter and Schiele, Bernt and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, PUBLREMARK = {Accepted}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Rhodin, Helge %A Richardt, Christian %A Casas, Dan %A Insafutdinov, Eldar %A Shafiei, Mohammad %A Seidel, Hans-Peter %A Schiele, Bernt %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute University of Bath Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-8321-6 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Rhodin, H., Robertini, N., Casas, D., Richardt, C., Seidel, H.-P., and Theobalt, C. 2016a. General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues. http://arxiv.org/abs/1607.08659.
(arXiv: 1607.08659)
Abstract
Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimension and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation - skeleton, volumetric shape, appearance, and optionally a body surface - and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume raycasting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, as well as variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way.
Export
BibTeX
@online{Rhodin2016arXiv1607.08659, TITLE = {General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues}, AUTHOR = {Rhodin, Helge and Robertini, Nadia and Casas, Dan and Richardt, Christian and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1607.08659}, EPRINT = {1607.08659}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimension and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation -- skeleton, volumetric shape, appearance, and optionally a body surface -- and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume raycasting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, as well as variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way.}, }
Endnote
%0 Report %A Rhodin, Helge %A Robertini, Nadia %A Casas, Dan %A Richardt, Christian %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9883-C %U http://arxiv.org/abs/1607.08659 %D 2016 %X Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimension and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation - skeleton, volumetric shape, appearance, and optionally a body surface - and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume raycasting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, as well as variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
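Both the arXiv report above and the ECCV paper that follows represent body shape as a volumetric Gaussian density field. The small NumPy sketch below only evaluates such a sum-of-Gaussians density at query points, with made-up blob parameters; the statistical body model and the differentiable contour alignment energy of the paper are not reproduced.

    import numpy as np

    def gaussian_density(points, centers, sigmas, weights):
        """Sum-of-isotropic-Gaussians density evaluated at N query points.
        points: (N, 3), centers: (K, 3), sigmas: (K,), weights: (K,)."""
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K)
        return (weights * np.exp(-0.5 * d2 / sigmas ** 2)).sum(axis=1)  # (N,)

    # Two blobs standing in for a torso and a head; density along a vertical line.
    centers = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.7]])
    sigmas = np.array([0.25, 0.12])
    weights = np.array([1.0, 0.6])
    line = np.stack([np.zeros(5), np.zeros(5), np.linspace(0.5, 2.0, 5)], axis=1)
    print(gaussian_density(line, centers, sigmas, weights).round(3))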
Rhodin, H., Robertini, N., Casas, D., Richardt, C., Seidel, H.-P., and Theobalt, C. 2016b. General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues. Computer Vision -- ECCV 2016, Springer.
Export
BibTeX
@inproceedings{RhodinECCV2016, TITLE = {General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues}, AUTHOR = {Rhodin, Helge and Robertini, Nadia and Casas, Dan and Richardt, Christian and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-3-319-46453-4}, DOI = {10.1007/978-3-319-46454-1_31}, PUBLISHER = {Springer}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {Computer Vision -- ECCV 2016}, DEBUG = {author: Leibe, Bastian; author: Matas, Jiri; author: Sebe, Nicu; author: Welling, Max}, PAGES = {509--526}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9909}, ADDRESS = {Amsterdam, The Netherlands}, }
Endnote
%0 Conference Proceedings %A Rhodin, Helge %A Robertini, Nadia %A Casas, Dan %A Richardt, Christian %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-986D-F %R 10.1007/978-3-319-46454-1_31 %D 2016 %B 14th European Conference on Computer Vision %Z date of event: 2016-10-11 - 2016-10-14 %C Amsterdam, The Netherlands %B Computer Vision -- ECCV 2016 %E Leibe, Bastian; Matas, Jiri; Sebe, Nicu; Welling, Max %P 509 - 526 %I Springer %@ 978-3-319-46453-4 %B Lecture Notes in Computer Science %N 9909
Rhodin, H., Robertini, N., Richardt, C., Seidel, H.-P., and Theobalt, C. 2016c. A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation. http://arxiv.org/abs/1602.03725.
(arXiv: 1602.03725)
Abstract
Generative reconstruction methods compute the 3D configuration (such as pose and/or geometry) of a shape by optimizing the overlap of the projected 3D shape model with images. Proper handling of occlusions is a big challenge, since the visibility function that indicates if a surface point is seen from a camera can often not be formulated in closed form, and is in general discrete and non-differentiable at occlusion boundaries. We present a new scene representation that enables an analytically differentiable closed-form formulation of surface visibility. In contrast to previous methods, this yields smooth, analytically differentiable, and efficient to optimize pose similarity energies with rigorous occlusion handling, fewer local minima, and experimentally verified improved convergence of numerical optimization. The underlying idea is a new image formation model that represents opaque objects by a translucent medium with a smooth Gaussian density distribution which turns visibility into a smooth phenomenon. We demonstrate the advantages of our versatile scene model in several generative pose estimation problems, namely marker-less multi-object pose estimation, marker-less human motion capture with few cameras, and image-based 3D geometry estimation.
Export
BibTeX
@online{Rhodin2016arXiv1602.03725, TITLE = {A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation}, AUTHOR = {Rhodin, Helge and Robertini, Nadia and Richardt, Christian and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.03725}, EPRINT = {1602.03725}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Generative reconstruction methods compute the 3D configuration (such as pose and/or geometry) of a shape by optimizing the overlap of the projected 3D shape model with images. Proper handling of occlusions is a big challenge, since the visibility function that indicates if a surface point is seen from a camera can often not be formulated in closed form, and is in general discrete and non-differentiable at occlusion boundaries. We present a new scene representation that enables an analytically differentiable closed-form formulation of surface visibility. In contrast to previous methods, this yields smooth, analytically differentiable, and efficient to optimize pose similarity energies with rigorous occlusion handling, fewer local minima, and experimentally verified improved convergence of numerical optimization. The underlying idea is a new image formation model that represents opaque objects by a translucent medium with a smooth Gaussian density distribution which turns visibility into a smooth phenomenon. We demonstrate the advantages of our versatile scene model in several generative pose estimation problems, namely marker-less multi-object pose estimation, marker-less human motion capture with few cameras, and image-based 3D geometry estimation.}, }
Endnote
%0 Report %A Rhodin, Helge %A Robertini, Nadia %A Richardt, Christian %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9875-C %U http://arxiv.org/abs/1602.03725 %D 2016 %X Generative reconstruction methods compute the 3D configuration (such as pose and/or geometry) of a shape by optimizing the overlap of the projected 3D shape model with images. Proper handling of occlusions is a big challenge, since the visibility function that indicates if a surface point is seen from a camera can often not be formulated in closed form, and is in general discrete and non-differentiable at occlusion boundaries. We present a new scene representation that enables an analytically differentiable closed-form formulation of surface visibility. In contrast to previous methods, this yields smooth, analytically differentiable, and efficient to optimize pose similarity energies with rigorous occlusion handling, fewer local minima, and experimentally verified improved convergence of numerical optimization. The underlying idea is a new image formation model that represents opaque objects by a translucent medium with a smooth Gaussian density distribution which turns visibility into a smooth phenomenon. We demonstrate the advantages of our versatile scene model in several generative pose estimation problems, namely marker-less multi-object pose estimation, marker-less human motion capture with few cameras, and image-based 3D geometry estimation. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Richardt, C., Kim, H., Valgaerts, L., and Theobalt, C. 2016. Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. http://arxiv.org/abs/1609.05115.
(arXiv: 1609.05115)
Abstract
We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings.
Export
BibTeX
@online{RichardtarXiv1609.05115, TITLE = {Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras}, AUTHOR = {Richardt, Christian and Kim, Hyeongwoo and Valgaerts, Levi and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1609.05115}, EPRINT = {1609.05115}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios.We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings.}, }
Endnote
%0 Report %A Richardt, Christian %A Kim, Hyeongwoo %A Valgaerts, Levi %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute University of Bath Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9AAF-D %U http://arxiv.org/abs/1609.05115 %D 2016 %X We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios.We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Robertini, N., de Aguiar, E., Helten, T., and Theobalt, C. 2016. Efficient Multi-view Performance Capture of Fine-Scale Surface Detail. http://arxiv.org/abs/1602.02023.
(arXiv: 1602.02023)
Abstract
We present a new and effective way for performance capture of deforming meshes with fine-scale, time-varying surface detail from multi-view video. Our method builds on coarse 4D surface reconstructions, as obtained with commonly used template-based methods. As these only capture models of coarse-to-medium scale detail, fine-scale deformation detail is often recovered in a second pass by using stereo constraints, features, or shading-based refinement. In this paper, we propose a new effective and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine-scale deformation of all mesh vertices that maximizes photo-consistency can be efficiently found by densely optimizing a new model-to-image consistency energy on all vertex positions. A principal advantage is that our problem formulation yields a smooth closed-form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding and discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we qualitatively and quantitatively show that we robustly capture more detail than related methods.
Export
BibTeX
@online{Robertini_arXiv2016, TITLE = {Efficient Multi-view Performance Capture of Fine-Scale Surface Detail}, AUTHOR = {Robertini, Nadia and de Aguiar, Edilson and Helten, Thomas and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.02023}, DOI = {10.1109/3DV.2014.46}, EPRINT = {1602.02023}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We present a new effective way for performance capture of deforming meshes with fine-scale time-varying surface detail from multi-view video. Our method builds up on coarse 4D surface reconstructions, as obtained with commonly used template-based methods. As they only capture models of coarse-to-medium scale detail, fine scale deformation detail is often done in a second pass by using stereo constraints, features, or shading-based refinement. In this paper, we propose a new effective and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine scale deformation of all mesh vertices that maximizes photo-consistency can be efficiently found by densely optimizing a new model-to-image consistency energy on all vertex positions. A principal advantage is that our problem formulation yields a smooth closed form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding, or discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we qualitatively and quantitatively show that we robustly capture more detail than related methods.}, }
Endnote
%0 Report %A Robertini, Nadia %A de Aguiar, Edilson %A Helten, Thomas %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Multi-view Performance Capture of Fine-Scale Surface Detail : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-07CD-A %R 10.1109/3DV.2014.46 %U http://arxiv.org/abs/1602.02023 %D 2016 %X We present a new effective way for performance capture of deforming meshes with fine-scale time-varying surface detail from multi-view video. Our method builds up on coarse 4D surface reconstructions, as obtained with commonly used template-based methods. As they only capture models of coarse-to-medium scale detail, fine scale deformation detail is often done in a second pass by using stereo constraints, features, or shading-based refinement. In this paper, we propose a new effective and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine scale deformation of all mesh vertices that maximizes photo-consistency can be efficiently found by densely optimizing a new model-to-image consistency energy on all vertex positions. A principal advantage is that our problem formulation yields a smooth closed form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding, or discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we qualitatively and quantitatively show that we robustly capture more detail than related methods. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV,Computer Science, Graphics, cs.GR
Rosenhahn, B. and Andres, B., eds. 2016. Pattern Recognition. Springer.
Export
BibTeX
@proceedings{RosenhahnGCPR2016, TITLE = {Pattern Recognition (GCPR 2016)}, EDITOR = {Rosenhahn, Bodo and Andres, Bjoern}, LANGUAGE = {eng}, DOI = {10.1007/978-3-319-45886-1}, PUBLISHER = {Springer}, YEAR = {2016}, DATE = {2016}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9796}, ADDRESS = {Hannover, Germany}, }
Endnote
%0 Conference Proceedings %E Rosenhahn, Bodo %E Andres, Bjoern %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Pattern Recognition : 38th German Conference, GCPR 2016 ; Hannover, Germany, September 12 - 15, 2016 ; Proceedings %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-AB24-D %R 10.1007/978-3-319-45886-1 %I Springer %D 2016 %B 38th German Conference on Pattern Recognition %Z date of event: 2016-09-12 - 2016-09-15 %D 2016 %C Hannover, Germany %S Lecture Notes in Computer Science %V 9796
Serrano, A., Heide, F., Gutierrez, D., Wetzstein, G., and Masia, B. 2016a. Convolutional Sparse Coding for High Dynamic Range Imaging. Computer Graphics Forum (Proc. EUROGRAPHICS 2016) 35, 2.
Export
BibTeX
@article{CSHDR_EG2016, TITLE = {Convolutional Sparse Coding for High Dynamic Range Imaging}, AUTHOR = {Serrano, Ana and Heide, Felix and Gutierrez, Diego and Wetzstein, Gordon and Masia, Belen}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12819}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {35}, NUMBER = {2}, PAGES = {153--163}, BOOKTITLE = {The European Association of Computer Graphics 37th Annual Conference (EUROGRAPHICS 2016)}, }
Endnote
%0 Journal Article %A Serrano, Ana %A Heide, Felix %A Gutierrez, Diego %A Wetzstein, Gordon %A Masia, Belen %+ External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Convolutional Sparse Coding for High Dynamic Range Imaging : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-78E5-3 %R 10.1111/cgf.12819 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 2 %& 153 %P 153 - 163 %I Wiley-Blackwell %C Oxford %@ false %B The European Association of Computer Graphics 37th Annual Conference %O EUROGRAPHICS 2016 Lisbon, Portugal, 9th-13th May 2016 EG 2016
Serrano, A., Gutierrez, D., Myszkowski, K., Seidel, H.-P., and Masia, B. An Intuitive Control Space for Material Appearance. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
(Accepted/in press)
Export
BibTeX
@article{Serrano_MaterialAppearance_2016, TITLE = {An Intuitive Control Space for Material Appearance}, AUTHOR = {Serrano, Ana and Gutierrez, Diego and Myszkowski, Karol and Seidel, Hans-Peter and Masia, Belen}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, PUBLREMARK = {Accepted}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Serrano, Ana %A Gutierrez, Diego %A Myszkowski, Karol %A Seidel, Hans-Peter %A Masia, Belen %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T An Intuitive Control Space for Material Appearance : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82B8-9 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Serrano, A., Gutierrez, D., Myszkowski, K., Seidel, H.-P., and Masia, B. 2016b. Intuitive Editing of Material Appearance. ACM SIGGRAPH 2016 Posters.
Export
BibTeX
@inproceedings{SerranoSIGGRAPH2016, TITLE = {Intuitive Editing of Material Appearance}, AUTHOR = {Serrano, Ana and Gutierrez, Diego and Myszkowski, Karol and Seidel, Hans-Peter and Masia, Belen}, LANGUAGE = {eng}, ISBN = {978-1-4503-4371-8}, DOI = {10.1145/2945078.2945141}, PUBLISHER = {ACM}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {ACM SIGGRAPH 2016 Posters}, PAGES = {1--2}, EID = {63}, ADDRESS = {Anaheim, CA, USA}, }
Endnote
%0 Generic %A Serrano, Ana %A Gutierrez, Diego %A Myszkowski, Karol %A Seidel, Hans-Peter %A Masia, Belen %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Intuitive Editing of Material Appearance : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0170-C %R 10.1145/2945078.2945141 %D 2016 %Z name of event: the 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques %Z date of event: 2016-07-24 - 2016-07-28 %Z place of event: Anaheim, CA, USA %B ACM SIGGRAPH 2016 Posters %P 1 - 2 %Z sequence number: 63 %@ 978-1-4503-4371-8
Sridhar, S., Müller, F., Zollhöfer, M., Casas, D., Oulasvirta, A., and Theobalt, C. 2016a. Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input. Max-Planck-Institut für Informatik, Saarbrücken.
Abstract
Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.
Export
BibTeX
@techreport{Report2016-4-001, TITLE = {Real-time Joint Tracking of a Hand Manipulating an Object from {RGB-D} Input}, AUTHOR = {Sridhar, Srinath and M{\"u}ller, Franziska and Zollh{\"o}fer, Michael and Casas, Dan and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2016-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, ABSTRACT = {Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Sridhar, Srinath %A Müller, Franziska %A Zollhöfer, Michael %A Casas, Dan %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-5510-A %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2016 %P 31 p. %X Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness. %B Research Report %@ false
Sridhar, S., Bailly, G., Heydrich, E., Oulasvirta, A., and Theobalt, C. 2016b. FullHand: Markerless Skeleton-based Tracking for Free-Hand Interaction. Max-Planck-Institut für Informatik, Saarbrücken.
Abstract
This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.
Export
BibTeX
@techreport{Report2016-4-002, TITLE = {{FullHand}: {M}arkerless Skeleton-based Tracking for Free-Hand Interaction}, AUTHOR = {Sridhar, Srinath and Bailly, Gilles and Heydrich, Elias and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2016-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, ABSTRACT = {This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Sridhar, Srinath %A Bailly, Gilles %A Heydrich, Elias %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T FullHand: Markerless Skeleton-based Tracking for Free-Hand Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-7456-7 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2016 %P 11 p. %X This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance. %B Research Report %@ false
Sridhar, S., Rhodin, H., Seidel, H.-P., Oulasvirta, A., and Theobalt, C. 2016c. Real-Time Hand Tracking Using a Sum of Anisotropic Gaussians Model. http://arxiv.org/abs/1602.03860.
(arXiv: 1602.03860)
Abstract
Real-time marker-less hand tracking is of increasing importance in human-computer interaction. Robust and accurate tracking of arbitrary hand motion is a challenging problem due to the many degrees of freedom, frequent self-occlusions, fast motions, and uniform skin color. In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real-time. The main contributions include a new generative tracking method which employs an implicit hand shape representation based on Sum of Anisotropic Gaussians (SAG), and a pose fitting energy that is smooth and analytically differentiable making fast gradient based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from literature. Our method achieves better accuracy than previous methods and runs at 25 fps. We show these improvements both qualitatively and quantitatively on publicly available datasets.
Export
BibTeX
@online{Sridhar2016arXiv1602.03860, TITLE = {Real-Time Hand Tracking Using a Sum of Anisotropic {Gaussians} Model}, AUTHOR = {Sridhar, Srinath and Rhodin, Helge and Seidel, Hans-Peter and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.03860}, EPRINT = {1602.03860}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Real-time marker-less hand tracking is of increasing importance in human-computer interaction. Robust and accurate tracking of arbitrary hand motion is a challenging problem due to the many degrees of freedom, frequent self-occlusions, fast motions, and uniform skin color. In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real-time. The main contributions include a new generative tracking method which employs an implicit hand shape representation based on Sum of Anisotropic Gaussians (SAG), and a pose fitting energy that is smooth and analytically differentiable making fast gradient based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from literature. Our method achieves better accuracy than previous methods and runs at 25 fps. We show these improvements both qualitatively and quantitatively on publicly available datasets.}, }
Endnote
%0 Report %A Sridhar, Srinath %A Rhodin, Helge %A Seidel, Hans-Peter %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Real-Time Hand Tracking Using a Sum of Anisotropic Gaussians Model : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9878-6 %U http://arxiv.org/abs/1602.03860 %D 2016 %X Real-time marker-less hand tracking is of increasing importance in human-computer interaction. Robust and accurate tracking of arbitrary hand motion is a challenging problem due to the many degrees of freedom, frequent self-occlusions, fast motions, and uniform skin color. In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real-time. The main contributions include a new generative tracking method which employs an implicit hand shape representation based on Sum of Anisotropic Gaussians (SAG), and a pose fitting energy that is smooth and analytically differentiable making fast gradient based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from literature. Our method achieves better accuracy than previous methods and runs at 25 fps. We show these improvements both qualitatively and quantitatively on publicly available datasets. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Sridhar, S., Müller, F., Oulasvirta, A., and Theobalt, C. 2016d. Fast and Robust Hand Tracking Using Detection-Guided Optimization. http://arxiv.org/abs/1602.04124.
(arXiv: 1602.04124)
Abstract
Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detection-guided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparably less computational resources which makes it extremely fast (50 fps without GPU support). The approach also supports varying static, or moving, camera-to-scene arrangements. We show the benefits of our method by evaluating on public datasets and comparing against previous work.
Export
BibTeX
@online{SridhararXiv1602.04124, TITLE = {Fast and Robust Hand Tracking Using Detection-Guided Optimization}, AUTHOR = {Sridhar, Srinath and M{\"u}ller, Franziska and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1602.04124}, EPRINT = {1602.04124}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detection-guided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparably less computational resources which makes it extremely fast (50 fps without GPU support). The approach also supports varying static, or moving, camera-to-scene arrangements. We show the benefits of our method by evaluating on public datasets and comparing against previous work.}, }
Endnote
%0 Report %A Sridhar, Srinath %A Müller, Franziska %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Fast and Robust Hand Tracking Using Detection-Guided Optimization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A76-9 %U http://arxiv.org/abs/1602.04124 %D 2016 %X Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detection-guided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparably less computational resources which makes it extremely fast (50 fps without GPU support). The approach also supports varying static, or moving, camera-to-scene arrangements. We show the benefits of our method by evaluating on public datasets and comparing against previous work. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Sridhar, S., Müller, F., Zollhöfer, M., Casas, D., Oulasvirta, A., and Theobalt, C. 2016e. Real-Time Joint Tracking of a Hand Manipulating an Object from RGB-D Input. Computer Vision -- ECCV 2016, Springer.
Export
BibTeX
@inproceedings{SridharECCV2016, TITLE = {Real-Time Joint Tracking of a Hand Manipulating an Object from {RGB}-{D} Input}, AUTHOR = {Sridhar, Srinath and M{\"u}ller, Franziska and Zollh{\"o}fer, Michael and Casas, Dan and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-3-319-46474-9}, DOI = {10.1007/978-3-319-46475-6_19}, PUBLISHER = {Springer}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {Computer Vision -- ECCV 2016}, EDITOR = {Leibe, Bastian and Matas, Jiri and Sebe, Nicu and Welling, Max}, PAGES = {294--310}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9906}, ADDRESS = {Amsterdam, The Netherlands}, }
Endnote
%0 Conference Proceedings %A Sridhar, Srinath %A Müller, Franziska %A Zollhöfer, Michael %A Casas, Dan %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Real-Time Joint Tracking of a Hand Manipulating an Object from RGB-D Input : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A3D-B %R 10.1007/978-3-319-46475-6_19 %D 2016 %B 14th European Conference on Computer Vision %Z date of event: 2016-10-11 - 2016-10-14 %C Amsterdam, The Netherlands %B Computer Vision -- ECCV 2016 %E Leibe, Bastian; Matas, Jiri; Sebe, Nicu; Welling, Max %P 294 - 310 %I Springer %@ 978-3-319-46474-9 %B Lecture Notes in Computer Science %N 9906
Steinberger, M., Derler, A., Zayer, R., and Seidel, H.-P. How Naive is Naive SpMV on the GPU? IEEE High Performance Extreme Computing Conference (HPEC 2016).
(Accepted/in press)
Export
BibTeX
@inproceedings{SteinbergerHPEC2016, TITLE = {How naive is naive {SpMV} on the {GPU}?}, AUTHOR = {Steinberger, Markus and Derler, Andreas and Zayer, Rhaleb and Seidel, Hans-Peter}, LANGUAGE = {eng}, YEAR = {2016}, PUBLREMARK = {Accepted}, BOOKTITLE = {IEEE High Performance Extreme Computing Conference (HPEC 2016)}, ADDRESS = {Waltham, MA, USA}, }
Endnote
%0 Conference Proceedings %A Steinberger, Markus %A Derler, Andreas %A Zayer, Rhaleb %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T How Naive is Naive SpMV on the GPU? : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-98A5-F %D 2016 %B IEEE High Performance Extreme Computing Conference %Z date of event: 2016-09-13 - 2016-09-15 %C Waltham, MA, USA %B IEEE High Performance Extreme Computing Conference
Templin, K., Didyk, P., Myszkowski, K., and Seidel, H.-P. 2016. Emulating Displays with Continuously Varying Frame Rates. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
Export
BibTeX
@article{TemplinSIGGRAPH2016, TITLE = {Emulating Displays with Continuously Varying Frame Rates}, AUTHOR = {Templin, Krzysztof and Didyk, Piotr and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925879}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {67}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Templin, Krzysztof %A Didyk, Piotr %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Emulating Displays with Continuously Varying Frame Rates : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-018D-E %R 10.1145/2897824.2925879 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 67 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., and Nießner, M. 2016a. Demo of Face2Face: Real-time Face Capture and Reenactment of RGB Videos. ACM SIGGRAPH 2016 Emerging Technologies, ACM.
Export
BibTeX
@inproceedings{ThiesSIGGRAPH2016, TITLE = {Demo of {Face2Face}: {R}eal-time face capture and reenactment of {RGB} videos}, AUTHOR = {Thies, Justus and Zollh{\"o}fer, Michael and Stamminger, Marc and Theobalt, Christian and Nie{\ss}ner, Matthias}, LANGUAGE = {eng}, ISBN = {978-1-4503-4372-5}, DOI = {10.1145/2929464.2929475}, PUBLISHER = {ACM}, YEAR = {2016}, BOOKTITLE = {ACM SIGGRAPH 2016 Emerging Technologies}, EID = {5}, ADDRESS = {Anaheim, CA, USA}, }
Endnote
%0 Conference Proceedings %A Thies, Justus %A Zollhöfer, Michael %A Stamminger, Marc %A Theobalt, Christian %A Nießner, Matthias %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Demo of Face2Face: Real-time Face Capture and Reenactment of RGB Videos : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A4C-9 %R 10.1145/2929464.2929475 %D 2016 %B 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques %Z date of event: 2016-07-24 - 2016-07-28 %C Anaheim, CA, USA %B ACM SIGGRAPH 2016 Emerging Technologies %Z sequence number: 5 %I ACM %@ 978-1-4503-4372-5
Thies, L., Zollhöfer, M., Richardt, C., Theobalt, C., and Greiner, G. 2016b. Real-time Halfway Domain Reconstruction of Motion and Geometry. http://richardt.name/publications/halfway-domain-scene-flow/.
Export
BibTeX
@misc{Thies3DV2016, TITLE = {Real-time Halfway Domain Reconstruction of Motion and Geometry}, AUTHOR = {Thies, Lucas and Zollh{\"o}fer, Michael and Richardt, Christian and Theobalt, Christian and Greiner, G{\"u}nther}, LANGUAGE = {eng}, URL = {http://richardt.name/publications/halfway-domain-scene-flow/}, YEAR = {2016}, }
Endnote
%0 Report %A Thies, Lucas %A Zollhöfer, Michael %A Richardt, Christian %A Theobalt, Christian %A Greiner, Günther %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Intel Visual Computing Institute University of Bath Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Real-time Halfway Domain Reconstruction of Motion and Geometry : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-B033-8 %U http://richardt.name/publications/halfway-domain-scene-flow/ %D 2016
Velten, A., Wu, D., Masia, B., et al. 2016. Imaging the Propagation of Light through Scenes at Picosecond Resolution. Communications of the ACM 59, 9.
Export
BibTeX
@article{Velten2016, TITLE = {Imaging the Propagation of Light through Scenes at Picosecond Resolution}, AUTHOR = {Velten, Andreas and Wu, Di and Masia, Belen and Jarabo, Adrian and Barsi, Christopher and Joshi, Chinmaya and Lawson, Everett and Bawendi, Moungi and Gutierrez, Diego and Raskar, Ramesh}, LANGUAGE = {eng}, ISSN = {0001-0782}, DOI = {10.1145/2975165}, PUBLISHER = {Association for Computing Machinery, Inc.}, ADDRESS = {New York, NY}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Communications of the ACM}, VOLUME = {59}, NUMBER = {9}, PAGES = {79--86}, }
Endnote
%0 Journal Article %A Velten, Andreas %A Wu, Di %A Masia, Belen %A Jarabo, Adrian %A Barsi, Christopher %A Joshi, Chinmaya %A Lawson, Everett %A Bawendi, Moungi %A Gutierrez, Diego %A Raskar, Ramesh %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations %T Imaging the Propagation of Light through Scenes at Picosecond Resolution : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-7E47-4 %R 10.1145/2975165 %7 2016 %D 2016 %J Communications of the ACM %V 59 %N 9 %& 79 %P 79 - 86 %I Association for Computing Machinery, Inc. %C New York, NY %@ false
Voglreiter, P., Hofmann, M., Ebner, C., et al. 2016. Visualization-Guided Evaluation of Simulated Minimally Invasive Cancer Treatment. Eurographics Workshop on Visual Computing for Biology and Medicine (EG VCBM 2016), Eurographics Association.
Export
BibTeX
@inproceedings{Voglreiter:VES:20161284, TITLE = {Visualization-Guided Evaluation of Simulated Minimally Invasive Cancer Treatment}, AUTHOR = {Voglreiter, Philip and Hofmann, Michael and Ebner, Christoph and Blanco Sequeiros, Roberto and Portugaller, Horst Rupert and F{\"u}tterer, J{\"u}rgen and Moche, Michael and Steinberger, Markus and Schmalstieg, Dieter}, LANGUAGE = {eng}, DOI = {10.2312/vcbm.20161284}, PUBLISHER = {Eurographics Association}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {Eurographics Workshop on Visual Computing for Biology and Medicine (EG VCBM 2016)}, EDITOR = {Bruckner, Stefan and Preim, Bernhard and Vilanova, Anna}, PAGES = {163--172}, ADDRESS = {Bergen, Norway}, }
Endnote
%0 Conference Proceedings %A Voglreiter, Philip %A Hofmann, Michael %A Ebner, Christoph %A Blanco Sequeiros, Roberto %A Portugaller, Horst Rupert %A Fütterer, Jürgen %A Moche, Michael %A Steinberger, Markus %A Schmalstieg, Dieter %+ External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Visualization-Guided Evaluation of Simulated Minimally Invasive Cancer Treatment : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-98CD-8 %R 10.2312/vcbm.20161284 %D 2016 %B Eurographics Workshop on Visual Computing for Biology and Medicine %Z date of event: 2016-09-07 - 2016-09-09 %C Bergen, Norway %B Eurographics Workshop on Visual Computing for Biology and Medicine %E Bruckner, Stefan; Preim, Bernhard; Vilanova, Anna %P 163 - 172 %I Eurographics Association
Von Radziewsky, P., Eisemann, E., Seidel, H.-P., and Hildebrandt, K. 2016. Optimized Subspaces for Deformation-based Modeling and Shape Interpolation. Computers and Graphics (Proc. SMI 2016) 58.
Export
BibTeX
@article{Radziewsky2016, TITLE = {Optimized Subspaces for Deformation-based Modeling and Shape Interpolation}, AUTHOR = {von Radziewsky, Philipp and Eisemann, Elmar and Seidel, Hans-Peter and Hildebrandt, Klaus}, LANGUAGE = {eng}, ISSN = {0097-8493}, DOI = {10.1016/j.cag.2016.05.016}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Computers and Graphics (Proc. SMI)}, VOLUME = {58}, PAGES = {128--138}, BOOKTITLE = {Shape Modeling International 2016 (SMI 2016)}, }
Endnote
%0 Journal Article %A von Radziewsky, Philipp %A Eisemann, Elmar %A Seidel, Hans-Peter %A Hildebrandt, Klaus %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Optimized Subspaces for Deformation-based Modeling and Shape Interpolation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0144-0 %R 10.1016/j.cag.2016.05.016 %7 2016 %D 2016 %J Computers and Graphics %V 58 %& 128 %P 128 - 138 %I Elsevier %C Amsterdam %@ false %B Shape Modeling International 2016 %O SMI 2016
Wang, Z., Martinez Esturo, J., Seidel, H.-P., and Weinkauf, T. 2016a. Stream Line–Based Pattern Search in Flows. Computer Graphics Forum Early View.
Export
BibTeX
@article{Wang:Esturo:Seidel:Weinkauf2016, TITLE = {Stream Line--Based Pattern Search in Flows}, AUTHOR = {Wang, Zhongjie and Martinez Esturo, Janick and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12990}, PUBLISHER = {Blackwell-Wiley}, ADDRESS = {Oxford}, YEAR = {2016}, JOURNAL = {Computer Graphics Forum}, VOLUME = {Early View}, }
Endnote
%0 Journal Article %A Wang, Zhongjie %A Martinez Esturo, Janick %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Stream Line–Based Pattern Search in Flows : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-4301-A %R 10.1111/cgf.12990 %7 2016 %D 2016 %J Computer Graphics Forum %O Computer Graphics Forum : journal of the European Association for Computer Graphics Comput. Graph. Forum %V Early View %I Blackwell-Wiley %C Oxford %@ false
Wang, Z., Seidel, H.-P., and Weinkauf, T. 2016b. Multi-field Pattern Matching Based on Sparse Feature Sampling. IEEE Transactions on Visualization and Computer Graphics 22, 1.
Export
BibTeX
@article{Wang2015, TITLE = {Multi-field Pattern Matching Based on Sparse Feature Sampling}, AUTHOR = {Wang, Zhongjie and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISSN = {1077-2626}, DOI = {10.1109/TVCG.2015.2467292}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {New York, NY}, YEAR = {2016}, DATE = {2016}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics}, VOLUME = {22}, NUMBER = {1}, PAGES = {807--816}, }
Endnote
%0 Journal Article %A Wang, Zhongjie %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Multi-field Pattern Matching Based on Sparse Feature Sampling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-1A76-6 %R 10.1109/TVCG.2015.2467292 %7 2015 %D 2016 %J IEEE Transactions on Visualization and Computer Graphics %V 22 %N 1 %& 807 %P 807 - 816 %I IEEE Computer Society %C New York, NY %@ false
2015
Ao, H., Zhang, Y., Jarabo, A., et al. 2015. Light Field Editing Based on Reparameterization. Advances in Multimedia Information Processing -- PCM 2015, Springer.
Export
BibTeX
@inproceedings{AoPCM2015, TITLE = {Light Field Editing Based on Reparameterization}, AUTHOR = {Ao, Hongbo and Zhang, Yongbing and Jarabo, Adrian and Masia, Belen and Liu, Yebin and Gutierrez, Diego and Dai, Qionghai}, LANGUAGE = {eng}, ISBN = {978-3-319-24074-9}, DOI = {10.1007/978-3-319-24075-6_58}, PUBLISHER = {Springer}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Advances in Multimedia Information Processing -- PCM 2015}, EDITOR = {Ho, Yo-Sung and Sang, Jitao and Ro, Yong Man and Kim, Junmo and Wu, Fei}, PAGES = {601--610}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9314}, ADDRESS = {Gwangju, South Korea}, }
Endnote
%0 Conference Proceedings %A Ao, Hongbo %A Zhang, Yongbing %A Jarabo, Adrian %A Masia, Belen %A Liu, Yebin %A Gutierrez, Diego %A Dai, Qionghai %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Light Field Editing Based on Reparameterization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-42DD-0 %R 10.1007/978-3-319-24075-6_58 %D 2015 %B 16th Pacific-Rim Conference on Multimedia %Z date of event: 2015-09-16 - 2015-09-18 %C Gwangju, South Korea %B Advances in Multimedia Information Processing -- PCM 2015 %E Ho, Yo-Sung; Sang, Jitao; Ro, Yong Man; Kim, Junmo; Wu, Fei %P 601 - 610 %I Springer %@ 978-3-319-24074-9 %B Lecture Notes in Computer Science %N 9314
Arpa, S., Ritschel, T., Myszkowski, K., Çapin, T., and Seidel, H.-P. 2015. Purkinje Images: Conveying Different Content for Different Luminance Adaptations in a Single Image. Computer Graphics Forum 34, 1.
Export
BibTeX
@article{arpa2014purkinje, TITLE = {Purkinje Images: {Conveying} Different Content for Different Luminance Adaptations in a Single Image}, AUTHOR = {Arpa, Sami and Ritschel, Tobias and Myszkowski, Karol and {\c C}apin, Tolga and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12463}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum}, VOLUME = {34}, NUMBER = {1}, PAGES = {116--126}, }
Endnote
%0 Journal Article %A Arpa, Sami %A Ritschel, Tobias %A Myszkowski, Karol %A Çapin, Tolga %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Purkinje Images: Conveying Different Content for Different Luminance Adaptations in a Single Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D0B-6 %R 10.1111/cgf.12463 %7 2014-10-18 %D 2015 %J Computer Graphics Forum %V 34 %N 1 %& 116 %P 116 - 126 %I Wiley-Blackwell %C Oxford
Bachynskyi, M. Physical ergonomics of tablet interaction while sitting. Proceedings of the 39th Annual Meeting of the American Society of Biomechanics.
(Accepted/in press)
Export
BibTeX
@inproceedings{Bachynskyi2015, TITLE = {Physical ergonomics of tablet interaction while sitting}, AUTHOR = {Bachynskyi, Myroslav}, LANGUAGE = {eng}, YEAR = {2015}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Proceedings of the 39th Annual Meeting of the American Society of Biomechanics}, ADDRESS = {Columbus, Ohio, USA}, }
Endnote
%0 Conference Proceedings %A Bachynskyi, Myroslav %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Physical ergonomics of tablet interaction while sitting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-17CB-2 %D 2015 %8 12.05.2015 %B 39th Annual Meeting of the American Society of Biomechanics %Z date of event: 2015-08-05 - 2015-08-08 %C Columbus, Ohio, USA %B Proceedings of the 39th Annual Meeting of the American Society of Biomechanics
Bachynskyi, M., Palmas, G., Oulasvirta, A., and Weinkauf, T. 2015a. Informing the Design of Novel Input Methods with Muscle Coactivation Clustering. ACM Transactions on Computer-Human Interaction 21, 6.
Export
BibTeX
@article{bachynskyi2014informing, TITLE = {Informing the Design of Novel Input Methods with Muscle Coactivation Clustering}, AUTHOR = {Bachynskyi, Myroslav and Palmas, Gregorio and Oulasvirta, Antti and Weinkauf, Tino}, LANGUAGE = {eng}, DOI = {10.1145/2687921}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Computer-Human Interaction}, VOLUME = {21}, NUMBER = {6}, PAGES = {1--25}, EID = {30}, }
Endnote
%0 Journal Article %A Bachynskyi, Myroslav %A Palmas, Gregorio %A Oulasvirta, Antti %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Informing the Design of Novel Input Methods with Muscle Coactivation Clustering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4D58-8 %R 10.1145/2687921 %7 2015 %D 2015 %J ACM Transactions on Computer-Human Interaction %O TOCHI %V 21 %N 6 %& 1 %P 1 - 25 %Z sequence number: 30 %I ACM %C New York, NY
Bachynskyi, M., Palmas, G., Oulasvirta, A., Steimle, J., and Weinkauf, T. 2015b. Performance and Ergonomics of Touch Surfaces: A Comparative Study Using Biomechanical Simulation. CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{BachynskyiCHI2015, TITLE = {Performance and Ergonomics of Touch Surfaces: {A} Comparative Study Using Biomechanical Simulation}, AUTHOR = {Bachynskyi, Myroslav and Palmas, Gregorio and Oulasvirta, Antti and Steimle, J{\"u}rgen and Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-1-4503-3145-6}, DOI = {10.1145/2702123.2702607}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems}, PAGES = {1817--1826}, ADDRESS = {Seoul, Korea}, }
Endnote
%0 Conference Proceedings %A Bachynskyi, Myroslav %A Palmas, Gregorio %A Oulasvirta, Antti %A Steimle, Jürgen %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Performance and Ergonomics of Touch Surfaces: A Comparative Study Using Biomechanical Simulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-6658-5 %R 10.1145/2702123.2702607 %D 2015 %B 33rd ACM SIGCHI Conference on Human Factors in Computing Systems %Z date of event: 2015-04-18 - 2015-04-23 %C Seoul, Korea %B CHI 2015 %P 1817 - 1826 %I ACM %@ 978-1-4503-3145-6
Bientinesi, P., Herrero, J.R., Quintana-Ortí, E.S., and Strzodka, R. 2015. Parallel Computing on Graphics Processing Units and Heterogeneous Platforms. Concurrency and Computation: Practice and Experience 27, 6.
Export
BibTeX
@article{escidoc:2148853, TITLE = {Parallel Computing on Graphics Processing Units and Heterogeneous Platforms}, AUTHOR = {Bientinesi, Paolo and Herrero, Jos{\'e} R. and Quintana-Ort{\'i}, Enrique S. and Strzodka, Robert}, LANGUAGE = {eng}, ISSN = {1532-0626}, DOI = {10.1002/cpe.3411}, PUBLISHER = {Wiley}, ADDRESS = {Chichester, UK}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Concurrency and Computation: Practice and Experience}, VOLUME = {27}, NUMBER = {6}, PAGES = {1525--1527}, }
Endnote
%0 Journal Article %A Bientinesi, Paolo %A Herrero, José R. %A Quintana-Ortí, Enrique S. %A Strzodka, Robert %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Parallel Computing on Graphics Processing Units and Heterogeneous Platforms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-BF47-A %R 10.1002/cpe.3411 %7 2014-10-07 %D 2015 %J Concurrency and Computation: Practice and Experience %V 27 %N 6 %& 1525 %P 1525 - 1527 %I Wiley %C Chichester, UK %@ false
Brandt, C., Seidel, H.-P., and Hildebrandt, K. 2015. Optimal Spline Approximation via ℓ₀-Minimization. Computer Graphics Forum (Proc. EUROGRAPHICS 2015) 34, 2.
Export
BibTeX
@article{Brandt2015, TITLE = {Optimal Spline Approximation via $\ell_0$-Minimization}, AUTHOR = {Brandt, Christopher and Seidel, Hans-Peter and Hildebrandt, Klaus}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12589}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {34}, NUMBER = {2}, PAGES = {617--626}, BOOKTITLE = {The 36th Annual Conference of the European Association of Computer Graphics (EUROGRAPHICS 2015)}, }
Endnote
%0 Journal Article %A Brandt, Christopher %A Seidel, Hans-Peter %A Hildebrandt, Klaus %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Optimal Spline Approximation via ℓ₀-Minimization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D67-5 %R 10.1111/cgf.12589 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 2 %& 617 %P 617 - 626 %I Wiley-Blackwell %C Oxford %B The 36th Annual Conference of the European Association of Computer Graphics %O EUROGRAPHICS 2015 4th - 8th May 2015, Kongresshaus in Zürich, Switzerland
Casas, D., Richardt, C., Collomosse, J., Theobalt, C., and Hilton, A. 2015. 4D Model Flow: Precomputed Appearance Alignment for Real-time 4D Video Interpolation. Computer Graphics Forum (Proc. Pacific Graphics 2015) 34, 7.
Export
BibTeX
@article{CasasPG2015, TITLE = {{4D} Model Flow: {P}recomputed Appearance Alignment for Real-time {4D} Video Interpolation}, AUTHOR = {Casas, Dan and Richardt, Christian and Collomosse, John and Theobalt, Christian and Hilton, Adrian}, LANGUAGE = {eng}, ISSN = {1467-8659}, DOI = {10.1111/cgf.12756}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {34}, NUMBER = {7}, PAGES = {173--182}, BOOKTITLE = {The 23rd Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2015)}, EDITOR = {Mitra, N. J. and Stam, J. and Xu, K.}, }
Endnote
%0 Journal Article %A Casas, Dan %A Richardt, Christian %A Collomosse, John %A Theobalt, Christian %A Hilton, Adrian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T 4D Model Flow: Precomputed Appearance Alignment for Real-time 4D Video Interpolation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5347-8 %R 10.1111/cgf.12756 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 7 %& 173 %P 173 - 182 %I Wiley-Blackwell %C Oxford, UK %@ false %B The 23rd Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2015 PG 2015 Tsinghua University, Beijing, October 7 – 9, 2015
Castaldo, F., Zamir, A., Angst, R., Palmieri, F., and Savarese, S. 2015. Semantic Cross-View Matching. IEEE International Conference on Computer Vision Workshops (ICCVW 2015), IEEE Computer Society.
Export
BibTeX
@inproceedings{AngstICCV_W2015, TITLE = {Semantic Cross-View Matching}, AUTHOR = {Castaldo, Francesco and Zamir, Amir and Angst, Roland and Palmieri, Francesco and Savarese, Silvio}, LANGUAGE = {eng}, ISBN = {978-1-4673-9711-7}, DOI = {10.1109/ICCVW.2015.137}, PUBLISHER = {IEEE Computer Society}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE International Conference on Computer Vision Workshops (ICCVW 2015)}, PAGES = {1044--1052}, ADDRESS = {Santiago, Chile}, }
Endnote
%0 Conference Proceedings %A Castaldo, Francesco %A Zamir, Amir %A Angst, Roland %A Palmieri, Francesco %A Savarese, Silvio %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Semantic Cross-View Matching : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-45A4-0 %R 10.1109/ICCVW.2015.137 %D 2015 %B IEEE International Conference on Computer Vision Workshops %Z date of event: 2015-12-11 - 2015-12-18 %C Santiago, Chile %B IEEE International Conference on Computer Vision Workshops %P 1044 - 1052 %I IEEE Computer Society %@ 978-1-4673-9711-7 %U http://www.cv-foundation.org/openaccess/content_iccv_2015_workshops/w28/papers/Castaldo_Semantic_Cross-View_Matching_ICCV_2015_paper.pdf
Elek, O. 2015. Efficient Methods for Physically-based Rendering of Participating Media. urn:nbn:de:bsz:291-scidok-65357.
Export
BibTeX
@phdthesis{ElekPhD2016, TITLE = {Efficient Methods for Physically-based Rendering of Participating Media}, AUTHOR = {Elek, Oskar}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65357}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Elek, Oskar %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %A referee: Dachsbacher, Karsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Efficient Methods for Physically-based Rendering of Participating Media : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-F94D-E %U urn:nbn:de:bsz:291-scidok-65357 %I Universität des Saarlandes %C Saarbrücken %D 2015 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6535/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Elhayek, A., de Aguiar, E., Tompson, J., et al. 2015a. Efficient ConvNet-based Marker-less Motion Capture in General Scenes with a Low Number of Cameras. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), IEEE Computer Society.
Export
BibTeX
@inproceedings{Elhayek15cvpr, TITLE = {Efficient {ConvNet}-based Marker-less Motion Capture in General Scenes with a Low Number of Cameras}, AUTHOR = {Elhayek, Ahmed and de Aguiar, Edilson and Tompson, Jonathan and Jain, Arjun and Pishchulin, Leonid and Andriluka, Mykhaylo and Bregler, Chris and Schiele, Bernt and Theobalt, Christian}, LANGUAGE = {eng}, DOI = {10.1109/CVPR.2015.7299005}, PUBLISHER = {IEEE Computer Society}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015)}, PAGES = {3810--3818}, ADDRESS = {Boston, MA, USA}, }
Endnote
%0 Conference Proceedings %A Elhayek, Ahmed %A de Aguiar, Edilson %A Tompson, Jonathan %A Jain, Arjun %A Pishchulin, Leonid %A Andriluka, Mykhaylo %A Bregler, Chris %A Schiele, Bernt %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient ConvNet-based Marker-less Motion Capture in General Scenes with a Low Number of Cameras : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0025-01B7-F %R 10.1109/CVPR.2015.7299005 %D 2015 %B IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2015-06-08 - 2015-06-10 %C Boston, MA, USA %B IEEE Conference on Computer Vision and Pattern Recognition %P 3810 - 3818 %I IEEE Computer Society
Elhayek, A. 2015. Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups.
Export
BibTeX
@phdthesis{ElhayekPhd15, TITLE = {Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups}, AUTHOR = {Elhayek, Ahmed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Elhayek, Ahmed %Y Theobalt, Christian %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-48A0-4 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P XIV, 124 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6325/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Elhayek, A., Stoll, C., Kim, K.J., and Theobalt, C. 2015b. Outdoor Human Motion Capture by Simultaneous Optimization of Pose and Camera Parameters. Computer Graphics Forum 34, 6.
Export
BibTeX
@article{CGF:CGF12519, TITLE = {Outdoor Human Motion Capture by Simultaneous Optimization of Pose and Camera Parameters}, AUTHOR = {Elhayek, Ahmed and Stoll, Carsten and Kim, Kil Joong and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12519}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum}, VOLUME = {34}, NUMBER = {6}, PAGES = {86--98}, }
Endnote
%0 Journal Article %A Elhayek, Ahmed %A Stoll, Carsten %A Kim, Kil Joong %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Outdoor Human Motion Capture by Simultaneous Optimization of Pose and Camera Parameters : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF1A-0 %R 10.1111/cgf.12519 %7 2014-12-11 %D 2015 %J Computer Graphics Forum %V 34 %N 6 %& 86 %P 86 - 98 %I Wiley-Blackwell %C Oxford %@ false
Garrido, P., Valgaerts, L., Sarmadi, H., et al. 2015. VDub: Modifying Face Video of Actors for Plausible Visual Alignment to a Dubbed Audio Track. Computer Graphics Forum (Proc. EUROGRAPHICS 2015) 34, 2.
Export
BibTeX
@article{Garrido15, TITLE = {{VDub}: Modifying Face Video of Actors for Plausible Visual Alignment to a Dubbed Audio Track}, AUTHOR = {Garrido, Pablo and Valgaerts, Levi and Sarmadi, Hamid and Steiner, Ingmar and Varanasi, Kiran and Perez, Patrick and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12552}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {34}, NUMBER = {2}, PAGES = {193--204}, BOOKTITLE = {The 38th Annual Conference of the European Association of Computer Graphics (EUROGRAPHICS 2015)}, }
Endnote
%0 Journal Article %A Garrido, Pablo %A Valgaerts, Levi %A Sarmadi, Hamid %A Steiner, Ingmar %A Varanasi, Kiran %A Perez, Patrick %A Theobalt, Christian %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T VDub: Modifying Face Video of Actors for Plausible Visual Alignment to a Dubbed Audio Track : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF2B-8 %R 10.1111/cgf.12552 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 2 %& 193 %P 193 - 204 %I Wiley-Blackwell %C Oxford %@ false %B The 38th Annual Conference of the European Association of Computer Graphics %O EUROGRAPHICS 2015 4th – 8th May 2015 , Kongresshaus in Zürich, Switzerland
Georgiev, I. 2015. Path Sampling Techniques for Efficient Light Transport Simulation.
Export
BibTeX
@phdthesis{Georgievphd15, TITLE = {Path Sampling Techniques for Efficient Light Transport Simulation}, AUTHOR = {Georgiev, Iliyan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Georgiev, Iliyan %Y Slusallek, Philipp %A referee: Seidel, Hans-Peter %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Path Sampling Techniques for Efficient Light Transport Simulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-6E59-9 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 162 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/urheberrecht.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6152/
Granados, M., Aydin, T.O., Tena, J.R., Lalonde, J.-F., and Theobalt, C. 2015. HDR Image Noise Estimation for Denoising Tone Mapped Images. Proceedings of the 12th European Conference on Visual Media Production (CVMP 2015), ACM.
Export
BibTeX
@inproceedings{GranadosCVMP2015, TITLE = {{HDR} Image Noise Estimation for Denoising Tone Mapped Images}, AUTHOR = {Granados, Miguel and Aydin, Tunc Ozan and Tena, J. Rafael and Lalonde, Jean-Fran{\c c}ois and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4503-3560-7}, DOI = {10.1145/2824840.2824847}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Proceedings of the 12th European Conference on Visual Media Production (CVMP 2015)}, EDITOR = {Collomosse, John and Cosker, Darren}, EID = {7}, ADDRESS = {London, UK}, }
Endnote
%0 Conference Proceedings %A Granados, Miguel %A Aydin, Tunc Ozan %A Tena, J. Rafael %A Lalonde, Jean-François %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T HDR Image Noise Estimation for Denoising Tone Mapped Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5335-0 %R 10.1145/2824840.2824847 %D 2015 %B 12th European Conference on Visual Media Production %Z date of event: 2015-11-24 - 2015-11-25 %C London, UK %B Proceedings of the 12th European Conference on Visual Media Production %E Collomosse, John; Cosker, Darren %Z sequence number: 7 %I ACM %@ 978-1-4503-3560-7
Grochulla, M.P. and Thormählen, T. 2015. Combining Photometric Normals and Multi-View Stereo for 3D Reconstruction. Proceedings of the 12th European Conference on Visual Media Production (CVMP 2015), ACM.
Export
BibTeX
@inproceedings{GrochullaCVMP2015, TITLE = {Combining Photometric Normals and Multi-View Stereo for {3D} Reconstruction}, AUTHOR = {Grochulla, Martin Peter and Thorm{\"a}hlen, Thorsten}, LANGUAGE = {eng}, ISBN = {978-1-4503-3560-7}, DOI = {10.1145/2824840.2824846}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Proceedings of the 12th European Conference on Visual Media Production (CVMP 2015)}, EDITOR = {Collomosse, John and Cosker, Darren}, EID = {7}, ADDRESS = {London, UK}, }
Endnote
%0 Conference Proceedings %A Grochulla, Martin Peter %A Thormählen, Thorsten %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Combining Photometric Normals and Multi-View Stereo for 3D Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-4DA8-4 %R 10.1145/2824840.2824846 %D 2015 %B 12th European Conference on Visual Media Production %Z date of event: 2015-11-24 - 2015-11-25 %C London, UK %B Proceedings of the 12th European Conference on Visual Media Production %E Collomosse, John; Cosker, Darren %Z sequence number: 7 %I ACM %@ 978-1-4503-3560-7
Gryaditskaya, Y., Pouli, T., Reinhard, E., Myszkowski, K., and Seidel, H.-P. 2015. Motion Aware Exposure Bracketing for HDR Video. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2015) 34, 4.
Export
BibTeX
@article{Gryaditskaya2015, TITLE = {Motion Aware Exposure Bracketing for {HDR} Video}, AUTHOR = {Gryaditskaya, Yulia and Pouli, Tania and Reinhard, Erik and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12684}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {34}, NUMBER = {4}, PAGES = {119--130}, BOOKTITLE = {Eurographics Symposium on Rendering 2015}, EDITOR = {Lehtinen, Jaakko and Nowrouzezahrai, Derek}, }
Endnote
%0 Journal Article %A Gryaditskaya, Yulia %A Pouli, Tania %A Reinhard, Erik %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Motion Aware Exposure Bracketing for HDR Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-15D2-B %R 10.1111/cgf.12684 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 4 %& 119 %P 119 - 130 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2015 %O Eurographics Symposium on Rendering 2015 EGSR 2015 Darmstadt, Germany, June 24th - 26th, 2015
Herzog, R., Mewes, D., Wand, M., Guibas, L., and Seidel, H.-P. 2015. LeSSS: Learned Shared Semantic Spaces for Relating Multi-modal Representations of 3D Shapes. Computer Graphics Forum (Proc. Eurographics Symposium on Geometric Processing 2015) 34, 5.
Export
BibTeX
@article{HerzogSGP2015, TITLE = {{LeSSS}: {L}earned {S}hared {S}emantic {S}paces for Relating Multi-Modal Representations of {3D} Shapes}, AUTHOR = {Herzog, Robert and Mewes, Daniel and Wand, Michael and Guibas, Leonidas and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12703}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Chichester}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Geometric Processing)}, VOLUME = {34}, NUMBER = {5}, PAGES = {141--151}, BOOKTITLE = {Symposium on Geometry Processing 2015 (Eurographics Symposium on Geometric Processing 2015)}, EDITOR = {Ben-Chen, Mirela and Liu, Ligang}, }
Endnote
%0 Journal Article %A Herzog, Robert %A Mewes, Daniel %A Wand, Michael %A Guibas, Leonidas %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T LeSSS: Learned Shared Semantic Spaces for Relating Multi-modal Representations of 3D Shapes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-8E9A-6 %R 10.1111/cgf.12703 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 5 %& 141 %P 141 - 151 %I Wiley-Blackwell %C Chichester %@ false %B Symposium on Geometry Processing 2015 %O Graz, Austria, July 6 - 8, 2015 SGP 2015 Eurographics Symposium on Geometric Processing 2015
Hulea, R.F. 2015. Compressed Vibration Modes for Deformable Objects.
Export
BibTeX
@mastersthesis{HuleaMaster2015, TITLE = {Compressed Vibration Modes for Deformable Objects}, AUTHOR = {Hulea, Razvan Florin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Hulea, Razvan Florin %Y Hildebrandt, Klaus %A referee: Seidel, Hans-Peter %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Compressed Vibration Modes for Deformable Objects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-2EAF-3 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 47 p. %V master %9 master
Jain, A., Chen, C., Thormählen, T., Metaxas, D., and Seidel, H.-P. 2015. Multi-layer Stencil Creation from Images. Computers and Graphics 48.
Export
BibTeX
@article{JainMulti-layer2015, TITLE = {Multi-layer Stencil Creation from Images}, AUTHOR = {Jain, Arjun and Chen, Chao and Thorm{\"a}hlen, Thorsten and Metaxas, Dimitris and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0097-8493}, DOI = {10.1016/j.cag.2015.02.003}, PUBLISHER = {Pergamon}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computers and Graphics}, VOLUME = {48}, PAGES = {11--22}, }
Endnote
%0 Journal Article %A Jain, Arjun %A Chen, Chao %A Thormählen, Thorsten %A Metaxas, Dimitris %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Multi-layer Stencil Creation from Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-9C34-A %R 10.1016/j.cag.2015.02.003 %7 2015-02-26 %D 2015 %J Computers and Graphics %V 48 %& 11 %P 11 - 22 %I Pergamon %C New York, NY %@ false
Kellnhofer, P., Ritschel, T., Myszkowski, K., Eisemann, E., and Seidel, H.-P. 2015a. Modeling Luminance Perception at Absolute Threshold. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2015) 34, 4.
Export
BibTeX
@article{Kellnhofer2015a, TITLE = {Modeling Luminance Perception at Absolute Threshold}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Eisemann, Elmar and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12687}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {34}, NUMBER = {4}, PAGES = {155--164}, BOOKTITLE = {Eurographics Symposium on Rendering 2015}, EDITOR = {Lehtinen, Jaakko and Nowrouzezahrai, Derek}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Eisemann, Elmar %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Modeling Luminance Perception at Absolute Threshold : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-8E8D-4 %R 10.1111/cgf.12687 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 4 %& 155 %P 155 - 164 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2015 %O Eurographics Symposium on Rendering 2015 EGSR 2015 Darmstadt, Germany, June 24th - 26th, 2015
Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2015b. A Transformation-aware Perceptual Image Metric. Human Vision and Electronic Imaging XX (HVEI 2015), SPIE/IS&T.
Abstract
Predicting human visual perception of image differences has several applications such as compression, rendering, editing and retargeting. Current approaches however, ignore the fact that the human visual system compensates for geometric transformations, e.g. we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed gets increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used for applications, such as comparison of non-aligned images, where transformations cause threshold elevation, and detection of salient transformations.
Export
BibTeX
@inproceedings{Kellnhofer2015, TITLE = {A Transformation-aware Perceptual Image Metric}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {9781628414844}, DOI = {10.1117/12.2076754}, PUBLISHER = {SPIE/IS\&T}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, ABSTRACT = {Predicting human visual perception of image differences has several applications such as compression, rendering, editing and retargeting. Current approaches however, ignore the fact that the human visual system compensates for geometric transformations, e.g. we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed gets increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used for applications, such as comparison of non-aligned images, where transformations cause threshold elevation, and detection of salient transformations.}, BOOKTITLE = {Human Vision and Electronic Imaging XX (HVEI 2015)}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N. and de Ridder, Huib}, EID = {939408}, SERIES = {Proceedings of SPIE}, VOLUME = {9394}, ADDRESS = {San Francisco, CA, USA}, }
Endnote
%0 Conference Proceedings %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Transformation-aware Perceptual Image Metric : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-544A-4 %R 10.1117/12.2076754 %D 2015 %B Human Vision and Electronic Imaging XX %Z date of event: 2015-02-08 - 2015-02-12 %C San Francisco, CA, USA %X Predicting human visual perception of image differences has several applications such as compression, rendering, editing and retargeting. Current approaches however, ignore the fact that the human visual system compensates for geometric transformations, e.g. we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed gets increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used for applications, such as comparison of non-aligned images, where transformations cause threshold elevation, and detection of salient transformations. %B Human Vision and Electronic Imaging XX %E Rogowitz, Bernice E.; Pappas, Thrasyvoulos N.; de Ridder, Huib %Z sequence number: 939408 %I SPIE/IS&T %@ 9781628414844 %B Proceedings of SPIE %N 9394
Kellnhofer, P., Leimkühler, T., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2015c. What Makes 2D-to-3D Stereo Conversion Perceptually Plausible? Proceedings SAP 2015, ACM.
Export
BibTeX
@inproceedings{Kellnhofer2015SAP, TITLE = {What Makes {2D}-to-{3D} Stereo Conversion Perceptually Plausible?}, AUTHOR = {Kellnhofer, Petr and Leimk{\"u}hler, Thomas and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, ISBN = {978-1-4503-3812-7}, DOI = {10.1145/2804408.2804409}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Proceedings SAP 2015}, PAGES = {59--66}, ADDRESS = {T{\"u}bingen, Germany}, }
Endnote
%0 Conference Proceedings %A Kellnhofer, Petr %A Leimkühler, Thomas %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T What Makes 2D-to-3D Stereo Conversion Perceptually Plausible? : %U http://hdl.handle.net/11858/00-001M-0000-0029-2460-7 %R 10.1145/2804408.2804409 %D 2015 %B ACM SIGGRAPH Symposium on Applied Perception %Z date of event: 2015-09-13 - 2015-09-14 %C Tübingen, Germany %B Proceedings SAP 2015 %P 59 - 66 %I ACM %@ 978-1-4503-3812-7 %U http://resources.mpi-inf.mpg.de/StereoCueFusion/WhatMakes3D/
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2015a. Semi-supervised Learning with Explicit Relationship Regularization. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), IEEE Computer Society.
Export
BibTeX
@inproceedings{KimCVPR2015, TITLE = {Semi-supervised Learning with Explicit Relationship Regularization}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, LANGUAGE = {eng}, DOI = {10.1109/CVPR.2015.7298831}, PUBLISHER = {IEEE Computer Society}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015)}, PAGES = {2188--2196}, ADDRESS = {Boston, MA, USA}, }
Endnote
%0 Conference Proceedings %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Semi-supervised Learning with Explicit Relationship Regularization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-9A6D-0 %R 10.1109/CVPR.2015.7298831 %D 2015 %B IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2015-06-07 - 2015-06-12 %C Boston, MA, USA %B IEEE Conference on Computer Vision and Pattern Recognition %P 2188 - 2196 %I IEEE Computer Society
Kim, K.I., Tompkin, J., Pfister, H., and Theobalt, C. 2015b. Context-guided Diffusion for Label Propagation on Graphs. ICCV 2015, IEEE International Conference on Computer Vision, IEEE.
Export
BibTeX
@inproceedings{KimICCV2015, TITLE = {Context-guided Diffusion for Label Propagation on Graphs}, AUTHOR = {Kim, Kwang In and Tompkin, James and Pfister, Hanspeter and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4673-8390-5}, DOI = {10.1109/ICCV.2015.318}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {ICCV 2015, IEEE International Conference on Computer Vision}, PAGES = {2776--2784}, ADDRESS = {Santiago, Chile}, }
Endnote
%0 Conference Proceedings %A Kim, Kwang In %A Tompkin, James %A Pfister, Hanspeter %A Theobalt, Christian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Context-guided Diffusion for Label Propagation on Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-52EF-9 %R 10.1109/ICCV.2015.318 %D 2015 %B IEEE International Conference on Computer Vision %Z date of event: 2015-12-13 - 2015-12-16 %C Santiago, Chile %B ICCV 2015 %P 2776 - 2784 %I IEEE %@ 978-1-4673-8390-5 %U http://www.cv-foundation.org/openaccess/content_iccv_2015/html/Kim_Context-Guided_Diffusion_for_ICCV_2015_paper.html
Klehm, O., Rousselle, F., Papas, M., et al. 2015a. Recent Advances in Facial Appearance Capture. Computer Graphics Forum (Proc. EUROGRAPHICS 2015) 34, 2.
Export
BibTeX
@article{Klehm2015Recent, TITLE = {Recent Advances in Facial Appearance Capture}, AUTHOR = {Klehm, Oliver and Rousselle, Fabrice and Papas, Marios and Bradley, Derek and Hery, Christophe and Bickel, Bernd and Jarosz, Wojciech and Beeler, Thabo}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12594}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {34}, NUMBER = {2}, PAGES = {709--733}, BOOKTITLE = {The 36th Annual Conference of the European Association of Computer Graphics (EUROGRAPHICS 2015)}, }
Endnote
%0 Journal Article %A Klehm, Oliver %A Rousselle, Fabrice %A Papas, Marios %A Bradley, Derek %A Hery, Christophe %A Bickel, Bernd %A Jarosz, Wojciech %A Beeler, Thabo %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations %T Recent Advances in Facial Appearance Capture : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-5042-A %R 10.1111/cgf.12594 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 2 %& 709 %P 709 - 733 %I Wiley-Blackwell %C Oxford %@ false %B The 36th Annual Conference of the European Association of Computer Graphics %O EUROGRAPHICS 2015 Zürich, Switzerland ; May 4th – 8th, 2015
Klehm, O., Kol, T.R., Seidel, H.-P., and Eisemann, E. 2015b. Stylized Scattering via Transfer Functions and Occluder Manipulation. Graphics Interface 2015, Graphics Interface Conference, Canadian Information Processing Society.
Export
BibTeX
@inproceedings{KlehmGI2015, TITLE = {Stylized Scattering via Transfer Functions and Occluder Manipulation}, AUTHOR = {Klehm, Oliver and Kol, Timothy R. and Seidel, Hans-Peter and Eisemann, Elmar}, LANGUAGE = {eng}, ISBN = {978-0-9947868-0-7}, PUBLISHER = {Canadian Information Processing Society}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Graphics Interface 2015, Graphics Interface Conference}, EDITOR = {Zhang, Hao Richard and Tang, Tony}, PAGES = {115--121}, ADDRESS = {Halifax, Canada}, }
Endnote
%0 Conference Proceedings %A Klehm, Oliver %A Kol, Timothy R. %A Seidel, Hans-Peter %A Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Stylized Scattering via Transfer Functions and Occluder Manipulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D415-8 %D 2015 %B Graphics Interface Conference %Z date of event: 2015-06-03 - 2015-06-05 %C Halifax, Canada %B Graphics Interface 2015 %E Zhang, Hao Richard; Tang, Tony %P 115 - 121 %I Canadian Information Processing Society %@ 978-0-9947868-0-7
Klehm, O. 2015. User-Guided Scene Stylization using Efficient Rendering Technique. urn:nbn:de:bsz:291-scidok-65321.
Export
BibTeX
@phdthesis{Klehmphd2016, TITLE = {User-Guided Scene Stylization using Efficient Rendering Technique}, AUTHOR = {Klehm, Oliver}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65321}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Klehm, Oliver %Y Seidel, Hans-Peter %A referee: Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T User-Guided Scene Stylization using Efficient Rendering Technique : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-9C13-A %U urn:nbn:de:bsz:291-scidok-65321 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P XIII, 111 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6532/
Kwon, Y., Kim, K.I., Tompkin, J., Kim, J.H., and Theobalt, C. 2015. Efficient Learning of Image Super-resolution and Compression Artifact Removal with Semi-local Gaussian Processes. IEEE Transactions on Pattern Analysis and Machine Intelligence 37, 9.
Export
BibTeX
@article{Kwon:2014:TPAMI, TITLE = {Efficient Learning of Image Super-resolution and Compression Artifact Removal with Semi-local {G}aussian Processes}, AUTHOR = {Kwon, Younghee and Kim, Kwang In and Tompkin, James and Kim, Jin Hyung and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0162-8828}, DOI = {10.1109/TPAMI.2015.2389797}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {IEEE Transactions on Pattern Analysis and Machine Intelligence}, VOLUME = {37}, NUMBER = {9}, PAGES = {1792--1805}, }
Endnote
%0 Journal Article %A Kwon, Younghee %A Kim, Kwang In %A Tompkin, James %A Kim, Jin Hyung %A Theobalt, Christian %+ External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Learning of Image Super-resolution and Compression Artifact Removal with Semi-local Gaussian Processes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF0A-3 %R 10.1109/TPAMI.2015.2389797 %7 2015-01-09 %D 2015 %J IEEE Transactions on Pattern Analysis and Machine Intelligence %O IEEE Trans. Pattern Anal. Mach. Intell. %V 37 %N 9 %& 1792 %P 1792 - 1805 %I IEEE Computer Society %C Los Alamitos, CA %@ false
Li, C., Wand, M., Wu, X., and Seidel, H.-P. 2015. Approximate 3D Partial Symmetry Detection Using Co-occurrence Analysis. International Conference on 3D Vision, IEEE.
Export
BibTeX
@inproceedings{Li3DV2015, TITLE = {Approximate {3D} Partial Symmetry Detection Using Co-occurrence Analysis}, AUTHOR = {Li, Chuan and Wand, Michael and Wu, Xiaokun and Seidel, Hans-Peter}, ISBN = {978-1-4673-8333-2}, DOI = {10.1109/3DV.2015.55}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {International Conference on 3D Vision}, EDITOR = {Brown, Michael and Kosecka, Jana and Theobalt, Christian}, PAGES = {425--433}, ADDRESS = {Lyon, France}, }
Endnote
%0 Conference Proceedings %A Li, Chuan %A Wand, Michael %A Wu, Xiaokun %A Seidel, Hans-Peter %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Approximate 3D Partial Symmetry Detection Using Co-occurrence Analysis : %U http://hdl.handle.net/11858/00-001M-0000-002B-34D8-0 %R 10.1109/3DV.2015.55 %D 2015 %B International Conference on 3D Vision %Z date of event: 2015-10-19 - 2015-10-22 %C Lyon, France %B International Conference on 3D Vision %E Brown, Michael; Kosecka, Jana; Theobalt, Christian %P 425 - 433 %I IEEE %@ 978-1-4673-8333-2
Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2015. High Dynamic Range Imaging. In: Wiley Encyclopedia of Electrical and Electronics Engineering. Wiley, New York, NY.
Export
BibTeX
@incollection{MantiukEncyclopedia2015, TITLE = {High Dynamic Range Imaging}, AUTHOR = {Mantiuk, Rafa{\l} and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1002/047134608X.W8265}, PUBLISHER = {Wiley}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Wiley Encyclopedia of Electrical and Electronics Engineering}, EDITOR = {Webster, John G.}, PAGES = {1--42}, }
Endnote
%0 Book Section %A Mantiuk, Rafał %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T High Dynamic Range Imaging : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-A376-B %R 10.1002/047134608X.W8265 %D 2015 %8 15.06.2015 %B Wiley Encyclopedia of Electrical and Electronics Engineering %E Webster, John G. %P 1 - 42 %I Wiley %C New York, NY
Masia, B., Serrano, A., and Gutierrez, D. 2015. Dynamic Range Expansion Based on Image Statistics. Multimedia Tools and Applications.
Export
BibTeX
@article{RTM_MMTA2015, TITLE = {Dynamic Range Expansion Based on Image Statistics}, AUTHOR = {Masia, Belen and Serrano, Ana and Gutierrez, Diego}, LANGUAGE = {eng}, ISSN = {1380-7501}, DOI = {10.1007/s11042-015-3036-0}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, JOURNAL = {Multimedia Tools and Applications}, PAGES = {1--18}, }
Endnote
%0 Journal Article %A Masia, Belen %A Serrano, Ana %A Gutierrez, Diego %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Dynamic Range Expansion Based on Image Statistics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-78ED-4 %R 10.1007/s11042-015-3036-0 %7 2015-11-17 %D 2015 %8 17.11.2015 %J Multimedia Tools and Applications %& 1 %P 1 - 18 %I Springer %C New York, NY %@ false
Michels, D.L. and Desbrun, M. 2015. A Semi-analytical Approach to Molecular Dynamics. Journal of Computational Physics 303.
Export
BibTeX
@article{Michels2015, TITLE = {A Semi-analytical Approach to Molecular Dynamics}, AUTHOR = {Michels, Dominik L. and Desbrun, Mathieu}, LANGUAGE = {eng}, ISSN = {0021-9991}, DOI = {10.1016/j.jcp.2015.10.009}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Journal of Computational Physics}, VOLUME = {303}, PAGES = {336--354}, }
Endnote
%0 Journal Article %A Michels, Dominik L. %A Desbrun, Mathieu %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T A Semi-analytical Approach to Molecular Dynamics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-34FB-C %R 10.1016/j.jcp.2015.10.009 %7 2015 %D 2015 %J Journal of Computational Physics %V 303 %& 336 %P 336 - 354 %I Elsevier %C Amsterdam %@ false
Nalbach, O., Ritschel, T., and Seidel, H.-P. 2015. The Bounced Z-buffer for Indirect Visibility. VMV 2015 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{NalbachVMV2015, TITLE = {The Bounced {Z}-buffer for Indirect Visibility}, AUTHOR = {Nalbach, Oliver and Ritschel, Tobias and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-3-905674-95-8}, DOI = {10.2312/vmv.20151261}, PUBLISHER = {Eurographics Association}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {VMV 2015 Vision, Modeling and Visualization}, EDITOR = {Bommes, David and Ritschel, Tobias and Schultz, Thomas}, PAGES = {79--86}, ADDRESS = {Aachen, Germany}, }
Endnote
%0 Conference Proceedings %A Nalbach, Oliver %A Ritschel, Tobias %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T The Bounced Z-buffer for Indirect Visibility : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-F762-F %R 10.2312/vmv.20151261 %D 2015 %B 20th International Symposium on Vision, Modeling and Visualization %Z date of event: 2015-10-07 - 2015-10-09 %C Aachen, Germany %B VMV 2015 Vision, Modeling and Visualization %E Bommes, David; Ritschel, Tobias; Schultz, Thomas %P 79 - 86 %I Eurographics Association %@ 978-3-905674-95-8
Nguyen, C., Nalbach, O., Ritschel, T., and Seidel, H.-P. 2015a. Guiding Image Manipulations Using Shape-appearance Subspaces from Co-alignment of Image Collections. Computer Graphics Forum (Proc. EUROGRAPHICS 2015) 34, 2.
Export
BibTeX
@article{NguyenEG2015, TITLE = {Guiding Image Manipulations Using Shape-appearance Subspaces from Co-alignment of Image Collections}, AUTHOR = {Nguyen, Chuong and Nalbach, Oliver and Ritschel, Tobias and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12548}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {34}, NUMBER = {2}, PAGES = {143--154}, BOOKTITLE = {The 36th Annual Conference of the European Association of Computer Graphics (EUROGRAPHICS 2015)}, }
Endnote
%0 Journal Article %A Nguyen, Chuong %A Nalbach, Oliver %A Ritschel, Tobias %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Guiding Image Manipulations Using Shape-appearance Subspaces from Co-alignment of Image Collections : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D6A-0 %R 10.1111/cgf.12548 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 2 %& 143 %P 143 - 154 %I Wiley-Blackwell %C Oxford %B The 36th Annual Conference of the European Association of Computer Graphics %O EUROGRAPHICS 2015 4th – 8th May 2015, Kongresshaus in Zürich, Switzerland EG 2015
Nguyen, C., Ritschel, T., and Seidel, H.-P. 2015b. Data-driven Color Manifolds. ACM Transactions on Graphics 34, 2.
Export
BibTeX
@article{NguyenTOG2015, TITLE = {Data-driven Color Manifolds}, AUTHOR = {Nguyen, Chuong and Ritschel, Tobias and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1145/2699645}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics}, VOLUME = {34}, NUMBER = {2}, EID = {20}, }
Endnote
%0 Journal Article %A Nguyen, Chuong %A Ritschel, Tobias %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Data-driven Color Manifolds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-680A-D %R 10.1145/2699645 %7 2015 %D 2015 %J ACM Transactions on Graphics %V 34 %N 2 %Z sequence number: 20 %I ACM %C New York, NY
Nguyen, C. 2015. Data-driven Approaches for Interactive Appearance Editing. urn:nbn:de:bsz:291-scidok-62372.
Export
BibTeX
@phdthesis{NguyenPhD2015, TITLE = {Data-driven Approaches for Interactive Appearance Editing}, AUTHOR = {Nguyen, Chuong}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-62372}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Nguyen, Chuong %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Data-driven Approaches for Interactive Appearance Editing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-9C47-9 %U urn:nbn:de:bsz:291-scidok-62372 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P XVII, 134 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6237/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Olberding, S. 2015. Fabricating Custom-shaped Thin-film Interactive Surfaces. urn:nbn:de:bsz:291-scidok-63285.
Export
BibTeX
@phdthesis{OlberdingPhD2015, TITLE = {Fabricating Custom-shaped Thin-film Interactive Surfaces}, AUTHOR = {Olberding, Simon}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-63285}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Olberding, Simon %Y Steimle, Jürgen %A referee: Krüger, Antonio %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Fabricating Custom-shaped Thin-film Interactive Surfaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5EF8-2 %U urn:nbn:de:bsz:291-scidok-63285 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P XVI, 145 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6328/
Olberding, S., Ortega, S.S., Hildebrandt, K., and Steimle, J. 2015. Foldio: Digital Fabrication of Interactive and Shape-changing Objects With Foldable Printed Electronics. UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, ACM.
Export
BibTeX
@inproceedings{OlberdingUIST2015, TITLE = {Foldio: {D}igital Fabrication of Interactive and Shape-changing Objects With Foldable Printed Electronics}, AUTHOR = {Olberding, Simon and Ortega, Sergio Soto and Hildebrandt, Klaus and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, DOI = {10.1145/2807442.2807494}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {UIST'15, 28th Annual ACM Symposium on User Interface Software and Technology}, PAGES = {223--232}, ADDRESS = {Charlotte, NC, USA}, }
Endnote
%0 Conference Proceedings %A Olberding, Simon %A Ortega, Sergio Soto %A Hildebrandt, Klaus %A Steimle, Jürgen %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Foldio: Digital Fabrication of Interactive and Shape-changing Objects With Foldable Printed Electronics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-6646-D %R 10.1145/2807442.2807494 %D 2015 %B 28th Annual ACM Symposium on User Interface Software and Technology %Z date of event: 2015-11-08 - 2015-11-11 %C Charlotte, NC, USA %B UIST'15 %P 223 - 232 %I ACM
Pepik, B. 2015. Richer Object Representations for Object Class Detection in Challenging Real World Image.
Export
BibTeX
@phdthesis{Pepikphd15, TITLE = {Richer Object Representations for Object Class Detection in Challenging Real World Image}, AUTHOR = {Pepik, Bojan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Pepik, Bojan %Y Schiele, Bernt %A referee: Theobalt, Christian %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Richer Object Representations for Object Class Detection in Challenging Real World Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-7678-5 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P xii, 219 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6361/
Pepik, B., Benenson, R., Ritschel, T., and Schiele, B. 2015a. What is Holding Back Convnets for Detection? Pattern Recognition (GCPR 2015), Springer.
Export
BibTeX
@inproceedings{Pepik2015GCPR, TITLE = {What is Holding Back Convnets for Detection?}, AUTHOR = {Pepik, Bojan and Benenson, Rodrigo and Ritschel, Tobias and Schiele, Bernt}, LANGUAGE = {eng}, ISBN = {978-3-319-24946-9}, DOI = {10.1007/978-3-319-24947-6_43}, PUBLISHER = {Springer}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Pattern Recognition (GCPR 2015)}, EDITOR = {Gall, J{\"u}rgen and Gehler, Peter and Leibe, Bastian}, PAGES = {517--528}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9358}, ADDRESS = {Aachen, Germany}, }
Endnote
%0 Conference Proceedings %A Pepik, Bojan %A Benenson, Rodrigo %A Ritschel, Tobias %A Schiele, Bernt %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T What is Holding Back Convnets for Detection? : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5912-C %R 10.1007/978-3-319-24947-6_43 %D 2015 %B 37th German Conference on Pattern Recognition %Z date of event: 2015-10-07 - 2015-10-10 %C Aachen, Germany %B Pattern Recognition %E Gall, Jürgen; Gehler, Peter; Leibe, Bastian %P 517 - 528 %I Springer %@ 978-3-319-24946-9 %B Lecture Notes in Computer Science %N 9358
Pepik, B., Stark, M., Gehler, P., Ritschel, T., and Schiele, B. 2015b. 3D Object Class Detection in the Wild. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (3DSI 2015), IEEE.
Export
BibTeX
@inproceedings{Pepik3DSI2015, TITLE = {{3D} Object Class Detection in the Wild}, AUTHOR = {Pepik, Bojan and Stark, Michael and Gehler, Peter and Ritschel, Tobias and Schiele, Bernt}, LANGUAGE = {eng}, DOI = {10.1109/CVPRW.2015.7301358}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (3DSI 2015)}, PAGES = {1--10}, ADDRESS = {Boston, MA, USA}, }
Endnote
%0 Conference Proceedings %A Pepik, Bojan %A Stark, Michael %A Gehler, Peter %A Ritschel, Tobias %A Schiele, Bernt %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T 3D Object Class Detection in the Wild : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5935-D %R 10.1109/CVPRW.2015.7301358 %D 2015 %B Workshop on 3D from a Single Image %Z date of event: 2015-06-07 - 2015-06-12 %C Boston, MA, USA %B IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) %P 1 - 10 %I IEEE
Pishchulin, L., Wuhrer, S., Helten, T., Theobalt, C., and Schiele, B. 2015. Building Statistical Shape Spaces for 3D Human Modeling. http://arxiv.org/abs/1503.05860.
(arXiv: 1503.05860)
Abstract
Statistical models of 3D human shape and pose learned from scan databases have developed into valuable tools to solve a variety of vision and graphics problems. Unfortunately, most publicly available models are of limited expressiveness as they were learned on very small databases that hardly reflect the true variety in human body shapes. In this paper, we contribute by rebuilding a widely used statistical body representation from the largest commercially available scan database, and making the resulting model available to the community (visit http://humanshape.mpi-inf.mpg.de). As preprocessing several thousand scans for learning the model is a challenge in itself, we contribute by developing robust best practice solutions for scan alignment that quantitatively lead to the best learned models. We make implementations of these preprocessing steps also publicly available. We extensively evaluate the improved accuracy and generality of our new model, and show its improved performance for human body reconstruction from sparse input data.
Export
BibTeX
@online{941x, TITLE = {Building Statistical Shape Spaces for {3D} Human Modeling}, AUTHOR = {Pishchulin, Leonid and Wuhrer, Stefanie and Helten, Thomas and Theobalt, Christian and Schiele, Bernt}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1503.05860}, EPRINT = {1503.05860}, EPRINTTYPE = {arXiv}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Statistical models of 3D human shape and pose learned from scan databases have developed into valuable tools to solve a variety of vision and graphics problems. Unfortunately, most publicly available models are of limited expressiveness as they were learned on very small databases that hardly reflect the true variety in human body shapes. In this paper, we contribute by rebuilding a widely used statistical body representation from the largest commercially available scan database, and making the resulting model available to the community (visit http://humanshape.mpi-inf.mpg.de). As preprocessing several thousand scans for learning the model is a challenge in itself, we contribute by developing robust best practice solutions for scan alignment that quantitatively lead to the best learned models. We make implementations of these preprocessing steps also publicly available. We extensively evaluate the improved accuracy and generality of our new model, and show its improved performance for human body reconstruction from sparse input data.}, }
Endnote
%0 Report %A Pishchulin, Leonid %A Wuhrer, Stefanie %A Helten, Thomas %A Theobalt, Christian %A Schiele, Bernt %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Building Statistical Shape Spaces for 3D Human Modeling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-4B26-F %U http://arxiv.org/abs/1503.05860 %D 2015 %X Statistical models of 3D human shape and pose learned from scan databases have developed into valuable tools to solve a variety of vision and graphics problems. Unfortunately, most publicly available models are of limited expressiveness as they were learned on very small databases that hardly reflect the true variety in human body shapes. In this paper, we contribute by rebuilding a widely used statistical body representation from the largest commercially available scan database, and making the resulting model available to the community (visit http://humanshape.mpi-inf.mpg.de). As preprocessing several thousand scans for learning the model is a challenge in itself, we contribute by developing robust best practice solutions for scan alignment that quantitatively lead to the best learned models. We make implementations of these preprocessing steps also publicly available. We extensively evaluate the improved accuracy and generality of our new model, and show its improved performance for human body reconstruction from sparse input data. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Rhodin, H., Tompkin, J., Kim, K.I., et al. 2015a. Generalizing Wave Gestures from Sparse Examples for Real-time Character Control. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2015) 34, 6.
Export
BibTeX
@article{RhodinSAP2015, TITLE = {Generalizing Wave Gestures from Sparse Examples for Real-time Character Control}, AUTHOR = {Rhodin, Helge and Tompkin, James and Kim, Kwang In and de Aguiar, Edilson and Pfister, Hanspeter and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2816795.2818082}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {34}, NUMBER = {6}, PAGES = {1--12}, EID = {181}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2015}, }
Endnote
%0 Journal Article %A Rhodin, Helge %A Tompkin, James %A Kim, Kwang In %A de Aguiar, Edilson %A Pfister, Hanspeter %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Generalizing Wave Gestures from Sparse Examples for Real-time Character Control : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-2476-8 %R 10.1145/2816795.2818082 %7 2015 %D 2015 %J ACM Transactions on Graphics %O TOG %V 34 %N 6 %& 1 %P 1 - 12 %Z sequence number: 181 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2015 %O ACM SIGGRAPH Asia 2015 Kobe, Japan
Rhodin, H., Robertini, N., Richardt, C., Seidel, H.-P., and Theobalt, C. 2015b. A Versatile Scene Model With Differentiable Visibility Applied to Generative Pose Estimation. ICCV 2015, IEEE International Conference on Computer Vision, IEEE.
Export
BibTeX
@inproceedings{RhodinICCV2015, TITLE = {A Versatile Scene Model With Differentiable Visibility Applied to Generative Pose Estimation}, AUTHOR = {Rhodin, Helge and Robertini, Nadia and Richardt, Christian and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4673-8390-5}, DOI = {10.1109/ICCV.2015.94}, PUBLISHER = {IEEE}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {ICCV 2015, IEEE International Conference on Computer Vision}, PAGES = {765--773}, ADDRESS = {Santiago, Chile}, }
Endnote
%0 Conference Proceedings %A Rhodin, Helge %A Robertini, Nadia %A Richardt, Christian %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Versatile Scene Model With Differentiable Visibility Applied to Generative Pose Estimation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-52DC-4 %R 10.1109/ICCV.2015.94 %D 2015 %B IEEE International Conference on Computer Vision %Z date of event: 2015-12-13 - 2015-12-16 %C Santiago, Chile %B ICCV 2015 %P 765 - 773 %I IEEE %@ 978-1-4673-8390-5 %U http://www.cv-foundation.org/openaccess/content_iccv_2015/html/Rhodin_A_Versatile_Scene_ICCV_2015_paper.html
Richardt, C., Tompkin, J., Bai, J., and Theobalt, C. 2015. User-centric Computational Videography. ACM SIGGRAPH 2015 Courses, ACM.
Export
BibTeX
@inproceedings{richardtSIGGRAPHCourse2015, TITLE = {User-centric Computational Videography}, AUTHOR = {Richardt, Christian and Tompkin, James and Bai, Jiamin and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4503-3634-5}, DOI = {10.1145/2776880.2792705}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {ACM SIGGRAPH 2015 Courses}, PAGES = {1--6}, EID = {25}, ADDRESS = {Los Angeles, CA, USA}, }
Endnote
%0 Conference Proceedings %A Richardt, Christian %A Tompkin, James %A Bai, Jiamin %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T User-centric Computational Videography : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5460-2 %R 10.1145/2776880.2792705 %D 2015 %B The 42nd International Conference and Exhibition on Computer Graphics and Interactive Techniques %Z date of event: 2015-08-09 - 2015-08-13 %C Los Angeles, CA, USA %B ACM SIGGRAPH 2015 Courses %P 1 - 6 %Z sequence number: 25 %I ACM %@ 978-1-4503-3634-5
Schmitz, M., Khalilbeigi, M., Balwierz, M., Lissermann, R., Mühlhäuser, M., and Steimle, J. 2015. Capricate: A Fabrication Pipeline to Design and 3D Print Capacitive Touch Sensors for Interactive Objects. UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, ACM.
Export
BibTeX
@inproceedings{SchmitzUIST2015, TITLE = {Capricate: {A} Fabrication Pipeline to Design and {3D} Print Capacitive Touch Sensors for Interactive Objects}, AUTHOR = {Schmitz, Martin and Khalilbeigi, Mohammadreza and Balwierz, Matthias and Lissermann, Roman and M{\"u}hlh{\"a}user, Max and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISBN = {978-1-4503-3779-3}, DOI = {10.1145/2807442.2807503}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {UIST'15, 28th Annual ACM Symposium on User Interface Software and Technology}, PAGES = {253--258}, ADDRESS = {Charlotte, NC, USA}, }
Endnote
%0 Conference Proceedings %A Schmitz, Martin %A Khalilbeigi, Mohammadreza %A Balwierz, Matthias %A Lissermann, Roman %A Mühlhäuser, Max %A Steimle, Jürgen %+ External Organizations External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Capricate: A Fabrication Pipeline to Design and 3D Print Capacitive Touch Sensors for Interactive Objects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-664A-5 %R 10.1145/2807442.2807503 %D 2015 %B 28th Annual ACM Symposium on User Interface Software and Technology %Z date of event: 2015-11-08 - 2015-11-11 %C Charlotte, NC, USA %B UIST'15 %P 253 - 258 %I ACM %@ 978-1-4503-3779-3
Schulz, C., von Tycowicz, C., Seidel, H.-P., and Hildebrandt, K. 2015. Animating Articulated Characters Using Wiggly Splines. Proceedings SCA 2015, ACM.
Export
BibTeX
@inproceedings{SchulzSCA2015, TITLE = {Animating Articulated Characters Using Wiggly Splines}, AUTHOR = {Schulz, Christian and von Tycowicz, Christoph and Seidel, Hans-Peter and Hildebrandt, Klaus}, LANGUAGE = {eng}, ISBN = {978-1-4503-3496-9}, DOI = {10.1145/2786784.2786799}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Proceedings SCA 2015}, PAGES = {101--109}, ADDRESS = {Los Angeles, CA, USA}, }
Endnote
%0 Conference Proceedings %A Schulz, Christian %A von Tycowicz, Christoph %A Seidel, Hans-Peter %A Hildebrandt, Klaus %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Animating Articulated Characters Using Wiggly Splines : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-8EA3-0 %R 10.1145/2786784.2786799 %D 2015 %B 14th ACM SIGGRAPH / Eurographics Symposium on Computer Animation %Z date of event: 2015-08-07 - 2015-08-09 %C Los Angeles, CA, USA %B Proceedings SCA 2015 %P 101 - 109 %I ACM %@ 978-1-4503-3496-9
Sridhar, S., Feit, A.M., Theobalt, C., and Oulasvirta, A. 2015a. Investigating the Dexterity of Multi-finger Input for Mid-air Text Entry. CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{sridhar_investigating_2015, TITLE = {Investigating the Dexterity of Multi-finger Input for Mid-air Text Entry}, AUTHOR = {Sridhar, Srinath and Feit, Anna Maria and Theobalt, Christian and Oulasvirta, Antti}, LANGUAGE = {eng}, ISBN = {978-1-4503-3145-6}, DOI = {10.1145/2702123.2702136}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems}, PAGES = {3643--3652}, ADDRESS = {Seoul, Korea}, }
Endnote
%0 Conference Proceedings %A Sridhar, Srinath %A Feit, Anna Maria %A Theobalt, Christian %A Oulasvirta, Antti %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Investigating the Dexterity of Multi-finger Input for Mid-air Text Entry : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF7B-3 %R 10.1145/2702123.2702136 %D 2015 %B 33rd ACM SIGCHI Conference on Human Factors in Computing Systems %Z date of event: 2015-04-18 - 2015-04-23 %C Seoul, Korea %B CHI 2015 %P 3643 - 3652 %I ACM %@ 978-1-4503-3145-6
Sridhar, S., Müller, F., Oulasvirta, A., and Theobalt, C. 2015b. Fast and Robust Hand Tracking Using Detection-Guided Optimization. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), IEEE Computer Society.
Export
BibTeX
@inproceedings{Sridhar15cvpr, TITLE = {Fast and Robust Hand Tracking Using Detection-Guided Optimization}, AUTHOR = {Sridhar, Srinath and M{\"u}ller, Franziska and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, DOI = {10.1109/CVPR.2015.7298941}, PUBLISHER = {IEEE Computer Society}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015)}, PAGES = {3213--3221}, ADDRESS = {Boston, MA, USA}, }
Endnote
%0 Conference Proceedings %A Sridhar, Srinath %A Müller, Franziska %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Fast and Robust Hand Tracking Using Detection-Guided Optimization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5456-9 %R 10.1109/CVPR.2015.7298941 %D 2015 %B IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2015-06-07 - 2015-06-12 %C Boston, MA, USA %B IEEE Conference on Computer Vision and Pattern Recognition %P 3213 - 3221 %I IEEE Computer Society
Steimle, J. 2015. Printed Electronics for Human-Computer Interaction. Interactions 22, 3.
Export
BibTeX
@article{SteimlePrinted, TITLE = {Printed Electronics for Human-Computer Interaction}, AUTHOR = {Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISSN = {1072-5520}, DOI = {10.1145/2754304}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Interactions}, VOLUME = {22}, NUMBER = {3}, PAGES = {72--75}, }
Endnote
%0 Journal Article %A Steimle, Jürgen %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Printed Electronics for Human-Computer Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-6642-6 %R 10.1145/2754304 %7 2015 %D 2015 %J Interactions %V 22 %N 3 %& 72 %P 72 - 75 %I ACM %C New York, NY %@ false
Sung, M., Kim, V.G., Angst, R., and Guibas, L. 2015. Data-driven Structural Priors for Shape Completion. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2015) 34, 6.
Export
BibTeX
@article{SungSIGGRAPHAsia2015, TITLE = {Data-driven Structural Priors for Shape Completion}, AUTHOR = {Sung, Minhyuk and Kim, Vladimir G. and Angst, Roland and Guibas, Leonidas}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2816795.2818094}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {34}, NUMBER = {6}, PAGES = {1--11}, EID = {175}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2015}, }
Endnote
%0 Journal Article %A Sung, Minhyuk %A Kim, Vladimir G. %A Angst, Roland %A Guibas, Leonidas %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Data-driven Structural Priors for Shape Completion : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-07CC-0 %R 10.1145/2816795.2818094 %7 2015 %D 2015 %J ACM Transactions on Graphics %O TOG %V 34 %N 6 %& 1 %P 1 - 11 %Z sequence number: 175 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2015 %O ACM SIGGRAPH Asia 2015 Kobe, Japan
Templin, K. 2015. Depth, Shading, and Stylization in Stereoscopic Cinematography. PhD Thesis, Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@phdthesis{Templinphd15, TITLE = {Depth, Shading, and Stylization in Stereoscopic Cinematography}, AUTHOR = {Templin, Krzysztof}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Templin, Krzysztof %Y Seidel, Hans-Peter %A referee: Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Depth, Shading, and Stylization in Stereoscopic Cinematography : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-19FA-2 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P xii, 100 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6439/
Thies, J., Zollhöfer, M., Nießner, M., Valgaerts, L., Stamminger, M., and Theobalt, C. 2015. Real-time Expression Transfer for Facial Reenactment. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2015) 34, 6.
Export
BibTeX
@article{ThiesSAP2015, TITLE = {Real-time Expression Transfer for Facial Reenactment}, AUTHOR = {Thies, Justus and Zollh{\"o}fer, Michael and Nie{\ss}ner, Matthias and Valgaerts, Levi and Stamminger, Marc and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2816795.2818056}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {34}, NUMBER = {6}, PAGES = {1--14}, EID = {183}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2015}, }
Endnote
%0 Journal Article %A Thies, Justus %A Zollhöfer, Michael %A Nießner, Matthias %A Valgaerts, Levi %A Stamminger, Marc %A Theobalt, Christian %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Expression Transfer for Facial Reenactment : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-2478-4 %R 10.1145/2816795.2818056 %7 2015 %D 2015 %J ACM Transactions on Graphics %O TOG %V 34 %N 6 %& 1 %P 1 - 14 %Z sequence number: 183 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2015 %O ACM SIGGRAPH Asia 2015 Kobe, Japan
Vangorp, P., Myszkowski, K., Graf, E., and Mantiuk, R. 2015a. An Empirical Model for Local Luminance Adaptation in the Fovea (Oral Presentation). Perception (Proc. ECVP 2015) 44, S1.
Export
BibTeX
@article{VangeropECVP2015, TITLE = {An Empirical Model for Local Luminance Adaptation in the Fovea (Oral Presentation)}, AUTHOR = {Vangorp, Peter and Myszkowski, Karol and Graf, Erich and Mantiuk, Rafa{\l}}, LANGUAGE = {eng}, ISSN = {0301-0066}, DOI = {10.1177/0301006615598674}, PUBLISHER = {SAGE}, ADDRESS = {London}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015-08}, JOURNAL = {Perception (Proc. ECVP)}, VOLUME = {44}, NUMBER = {S1}, PAGES = {98--98}, EID = {1T3C001}, BOOKTITLE = {38th European Conference on Visual Perception (ECVP 2015)}, }
Endnote
%0 Journal Article %A Vangorp, Peter %A Myszkowski, Karol %A Graf, Erich %A Mantiuk, Rafał %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T An Empirical Model for Local Luminance Adaptation in the Fovea (Oral Presentation) : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-245C-4 %R 10.1177/0301006615598674 %7 2015 %D 2015 %J Perception %V 44 %N S1 %& 98 %P 98 - 98 %Z sequence number: 1T3C001 %I SAGE %C London %@ false %B 38th European Conference on Visual Perception %O ECVP 2015 Liverpool
Vangorp, P., Myszkowski, K., Graf, E.W., and Mantiuk, R.K. 2015b. A Model of Local Adaptation. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2015) 34, 6.
Export
BibTeX
@article{Vangorp:2015:LocalAdaptationSIGAsia, TITLE = {A Model of Local Adaptation}, AUTHOR = {Vangorp, Peter and Myszkowski, Karol and Graf, Erich W. and Mantiuk, Rafa{\l} K.}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2816795.2818086}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {34}, NUMBER = {6}, PAGES = {1--13}, EID = {166}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2015}, }
Endnote
%0 Journal Article %A Vangorp, Peter %A Myszkowski, Karol %A Graf, Erich W. %A Mantiuk, Rafał K. %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Model of Local Adaptation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-2455-1 %R 10.1145/2816795.2818086 %7 2015 %D 2015 %J ACM Transactions on Graphics %O TOG %V 34 %N 6 %& 1 %P 1 - 13 %Z sequence number: 166 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2015 %O ACM SIGGRAPH Asia 2015 Kobe, Japan %U http://resources.mpi-inf.mpg.de/LocalAdaptation/
Von Tycowicz, C., Schulz, C., Seidel, H.-P., and Hildebrandt, K. 2015. Real-time Nonlinear Shape Interpolation. ACM Transactions on Graphics 34, 3.
Export
BibTeX
@article{Tycowicz2015, TITLE = {Real-time Nonlinear Shape Interpolation}, AUTHOR = {von Tycowicz, Christoph and Schulz, Christian and Seidel, Hans-Peter and Hildebrandt, Klaus}, LANGUAGE = {eng}, DOI = {10.1145/2729972}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics}, VOLUME = {34}, NUMBER = {3}, EID = {34}, }
Endnote
%0 Journal Article %A von Tycowicz, Christoph %A Schulz, Christian %A Seidel, Hans-Peter %A Hildebrandt, Klaus %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Nonlinear Shape Interpolation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D65-9 %R 10.1145/2729972 %7 2015 %D 2015 %J ACM Transactions on Graphics %V 34 %N 3 %Z sequence number: 34 %I ACM %C New York, NY
Wang, Z. 2015. Pattern Search for the Visualization of Scalar, Vector, and Line Fields. PhD Thesis, Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@phdthesis{WangPhd15, TITLE = {Pattern Search for the Visualization of Scalar, Vector, and Line Fields}, AUTHOR = {Wang, Zhongjie}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Wang, Zhongjie %Y Seidel, Hans-Peter %A referee: Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Pattern Search for the Visualization of Scalar, Vector, and Line Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-48A5-9 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 103 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6330/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Wang, Z., Seidel, H.-P., and Weinkauf, T. 2015. Hierarchical Hashing for Pattern Search in 3D Vector Fields. VMV 2015 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{WangVMV2015, TITLE = {Hierarchical Hashing for Pattern Search in {3D} Vector Fields}, AUTHOR = {Wang, Zhongjie and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-3-905674-95-8}, DOI = {10.2312/vmv.20151256}, PUBLISHER = {Eurographics Association}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {VMV 2015 Vision, Modeling and Visualization}, EDITOR = {Bommes, David and Ritschel, Tobias and Schultz, Thomas}, PAGES = {41--48}, ADDRESS = {Aachen, Germany}, }
Endnote
%0 Conference Proceedings %A Wang, Zhongjie %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Hierarchical Hashing for Pattern Search in 3D Vector Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-F760-4 %R 10.2312/vmv.20151256 %D 2015 %B 20th International Symposium on Vision, Modeling and Visualization %Z date of event: 2015-10-07 - 2015-10-09 %C Aachen, Germany %B VMV 2015 Vision, Modeling and Visualization %E Bommes, David; Ritschel, Tobias; Schultz, Thomas %P 41 - 48 %I Eurographics Association %@ 978-3-905674-95-8
Weigel, M., Lu, T., Oulasvirta, A., Bailly, G., Majidi, C., and Steimle, J. 2015. iSkin: Flexible, Stretchable and Visually Customizable On-body Touch Sensors for Mobile Computing. CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{Weigel2015, TITLE = {{iSkin}: {Flexible}, Stretchable and Visually Customizable On-Body Touch Sensors for Mobile Computing}, AUTHOR = {Weigel, Martin and Lu, Tong and Oulasvirta, Antti and Bailly, Gilles and Majidi, Carmel and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISBN = {978-1-4503-3145-6}, DOI = {10.1145/2702123.2702391}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {CHI 2015, 33rd ACM SIGCHI Conference on Human Factors in Computing Systems}, PAGES = {2991--3000}, ADDRESS = {Seoul, Korea}, }
Endnote
%0 Conference Proceedings %A Weigel, Martin %A Lu, Tong %A Oulasvirta, Antti %A Bailly, Gilles %A Majidi, Carmel %A Steimle, Jürgen %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T iSkin: Flexible, Stretchable and Visually Customizable On-body Touch Sensors for Mobile Computing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-BFA0-C %R 10.1145/2702123.2702391 %D 2015 %B 33rd ACM SIGCHI Conference on Human Factors in Computing Systems %Z date of event: 2015-04-18 - 2015-04-23 %C Seoul, Korea %B CHI 2015 %P 2991 - 3000 %I ACM %@ 978-1-4503-3145-6
Zollhöfer, M., Dai, A., Innmann, M., et al. 2015. Shading-based Refinement on Volumetric Signed Distance Functions. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2015) 34, 4.
Export
BibTeX
@article{ZollhoeferSIGGRAPH2015, TITLE = {Shading-based Refinement on Volumetric Signed Distance Functions}, AUTHOR = {Zollh{\"o}fer, Michael and Dai, Angela and Innmann, Matthias and Wu, Chenglei and Stamminger, Marc and Theobalt, Christian and Nie{\ss}ner, Matthias}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2766887}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {34}, NUMBER = {4}, EID = {96}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2015}, }
Endnote
%0 Journal Article %A Zollhöfer, Michael %A Dai, Angela %A Innmann, Matthias %A Wu, Chenglei %A Stamminger, Marc %A Theobalt, Christian %A Nießner, Matthias %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Shading-based Refinement on Volumetric Signed Distance Functions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-528D-5 %R 10.1145/2766887 %7 2015 %D 2015 %J ACM Transactions on Graphics %V 34 %N 4 %Z sequence number: 96 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2015 %O ACM SIGGRAPH 2015 Los Angeles, California
2014
Åkesson, S., Odin, C., Hegedüs, R., et al. 2014. Testing Avian Compass Calibration: Comparative Experiments with Diurnal and Nocturnal Passerine Migrants in South Sweden. Biology Open 4, 1.
Export
BibTeX
@article{Hegedus2014BiologyOpen, TITLE = {Testing Avian Compass Calibration: {C}omparative Experiments with Diurnal and Nocturnal Passerine Migrants in {S}outh {S}weden}, AUTHOR = {{\AA}kesson, Susanne and Odin, Catharina and Heged{\"u}s, Ramon and Ilieva, Mihaela and Sj{\"o}holm, Christoffer and Farkas, Alexandra and Horv{\'a}th, G{\'a}bor}, LANGUAGE = {eng}, ISSN = {2046-6390}, URL = {http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4295164&tool=pmcentrez&rendertype=abstract}, DOI = {10.1242/bio.20149837}, PUBLISHER = {The Company of Biologists}, ADDRESS = {Cambridge}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, JOURNAL = {Biology Open}, VOLUME = {4}, NUMBER = {1}, PAGES = {35--47}, }
Endnote
%0 Journal Article %A Åkesson, Susanne %A Odin, Catharina %A Hegedüs, Ramon %A Ilieva, Mihaela %A Sjöholm, Christoffer %A Farkas, Alexandra %A Horváth, Gábor %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T Testing Avian Compass Calibration: Comparative Experiments with Diurnal and Nocturnal Passerine Migrants in South Sweden : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-CD29-7 %2 PMC4295164 %F OTHER: publisher-idBIO20149837 %R 10.1242/bio.20149837 %U http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4295164&tool=pmcentrez&rendertype=abstract %7 2014-12-12 %D 2014 %8 12.12.2014 %K Erithacus rubecula %J Biology Open %V 4 %N 1 %& 35 %P 35 - 47 %I The Company of Biologists %C Cambridge %@ false
Athukorala, K., Oulasvirta, A., Glowacka, D., Vreeken, J., and Jacucci, G. 2014a. Interaction Model to Predict Subjective-specificity of Search Results. UMAP 2014 Extended Proceedings, CEUR-WS.org.
Export
BibTeX
@inproceedings{atukorala:14:interaction, TITLE = {Interaction Model to Predict Subjective-specificity of Search Results}, AUTHOR = {Athukorala, Kumaripaba and Oulasvirta, Antti and Glowacka, Dorota and Vreeken, Jilles and Jacucci, Giulio}, LANGUAGE = {eng}, URL = {http://ceur-ws.org/Vol-1181/umap2014_lateresults_01.pdf; urn:nbn:de:0074-1181-4}, PUBLISHER = {CEUR-WS.org}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {UMAP 2014 Extended Proceedings}, EDITOR = {Cantador, Iv{\'a}n and Chi, Min and Farzan, Rosta and J{\"a}schke, Robert}, PAGES = {69--74}, SERIES = {CEUR Workshop Proceedings}, VOLUME = {1181}, ADDRESS = {Aalborg, Denmark}, }
Endnote
%0 Conference Proceedings %A Athukorala, Kumaripaba %A Oulasvirta, Antti %A Glowacka, Dorota %A Vreeken, Jilles %A Jacucci, Giulio %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Interaction Model to Predict Subjective-specificity of Search Results : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5397-D %U http://ceur-ws.org/Vol-1181/umap2014_lateresults_01.pdf %D 2014 %B 22nd Conference on User Modeling, Adaptation, and Personalization %Z date of event: 2014-07-07 - 2014-07-11 %C Aalborg, Denmark %B UMAP 2014 Extended Proceedings %E Cantador, Iván; Chi, Min; Farzan, Rosta; Jäschke, Robert %P 69 - 74 %I CEUR-WS.org %B CEUR Workshop Proceedings %N 1181 %U http://ceur-ws.org/Vol-1181/umap2014_lateresults_01.pdf
Athukorala, K., Oulasvirta, A., Glowacka, D., Vreeken, J., and Jacucci, G. 2014b. Supporting Exploratory Search Through User Modelling. UMAP 2014 Extended Proceedings (PIA 2014 in conjunction with UMAP 2014), CEUR-WS.org.
Export
BibTeX
@inproceedings{atukorala:14:supporting, TITLE = {Supporting Exploratory Search Through User Modelling}, AUTHOR = {Athukorala, Kumaripaba and Oulasvirta, Antti and Glowacka, Dorota and Vreeken, Jilles and Jacucci, Giulio}, LANGUAGE = {eng}, ISSN = {1613-0073}, URL = {http://ceur-ws.org/Vol-1181/pia2014_paper_04.pdf; urn:nbn:de:0074-1181-4; http://ceur-ws.org/Vol-1181/pia2014_proceedings.pdf}, PUBLISHER = {CEUR-WS.org}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {UMAP 2014 Extended Proceedings (PIA 2014 in conjunction with UMAP 2014)}, EDITOR = {Cantador, Iv{\'a}n and Chi, Min and Farzan, Rosta and J{\"a}schke, Robert}, PAGES = {1--47}, SERIES = {CEUR Workshop Proceedings}, VOLUME = {1181}, ADDRESS = {Aalborg, Denmark}, }
Endnote
%0 Conference Proceedings %A Athukorala, Kumaripaba %A Oulasvirta, Antti %A Glowacka, Dorota %A Vreeken, Jilles %A Jacucci, Giulio %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Supporting Exploratory Search Through User Modelling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-538C-7 %U http://ceur-ws.org/Vol-1181/pia2014_paper_04.pdf %D 2014 %B Joint Workshop on Personalised Information Access %Z date of event: 2014-07-07 - 2014-07-07 %C Aalborg, Denmark %B UMAP 2014 Extended Proceedings %E Cantador, Iván; Chi, Min; Farzan, Rosta; Jäschke, Robert %P 1 - 47 %I CEUR-WS.org %B CEUR Workshop Proceedings %N 1181 %@ false %U http://ceur-ws.org/Vol-1181/pia2014_paper_04.pdf
Athukorala, K., Oulasvirta, A., Glowacka, D., Vreeken, J., and Jacucci, G. 2014c. Narrow or Broad? Estimating Subjective Specificity in Exploratory Search. CIKM’14, 23rd ACM International Conference on Information and Knowledge Management, ACM.
Export
BibTeX
@inproceedings{atukorala:14:foraging, TITLE = {Narrow or Broad? {Estimating} Subjective Specificity in Exploratory Search}, AUTHOR = {Athukorala, Kumaripaba and Oulasvirta, Antti and Glowacka, Dorota and Vreeken, Jilles and Jacucci, Giulio}, LANGUAGE = {eng}, ISBN = {978-1-4503-2598-1}, DOI = {10.1145/2661829.2661904}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {CIKM'14, 23rd ACM International Conference on Information and Knowledge Management}, EDITOR = {Li, Jianzhong and Wang, X. Sean and Garofalakis, Minos and Soboroff, Ian and Suel, Torsten and Wang, Min}, PAGES = {819--828}, ADDRESS = {Shanghai, China}, }
Endnote
%0 Conference Proceedings %A Athukorala, Kumaripaba %A Oulasvirta, Antti %A Glowacka, Dorota %A Vreeken, Jilles %A Jacucci, Giulio %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Narrow or Broad? Estimating Subjective Specificity in Exploratory Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-53A1-6 %R 10.1145/2661829.2661904 %D 2014 %B 23rd ACM International Conference on Information and Knowledge Management %Z date of event: 2014-11-03 - 2014-11-07 %C Shanghai, China %B CIKM'14 %E Li, Jianzhong; Wang, X. Sean; Garofalakis, Minos; Soboroff, Ian; Suel, Torsten; Wang, Min %P 819 - 828 %I ACM %@ 978-1-4503-2598-1
Bachynskyi, M., Oulasvirta, A., Palmas, G., and Weinkauf, T. 2014. Is Motion-capture-based Biomechanical Simulation Valid for HCI Studies? Study and Implications. CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM.
Abstract
Motion-capture-based biomechanical simulation is a non-invasive analysis method that yields a rich description of posture, joint, and muscle activity in human movement. The method is presently gaining ground in sports, medicine, and industrial ergonomics, but it also bears great potential for studies in HCI where the physical ergonomics of a design is important. To make the method more broadly accessible, we study its predictive validity for movements and users typical to studies in HCI. We discuss the sources of error in biomechanical simulation and present results from two validation studies conducted with a state-of-the-art system. Study I tested aimed movements ranging from multitouch gestures to dancing, finding out that the critical limiting factor is the size of movement. Study II compared muscle activation predictions to surface-EMG recordings in a 3D pointing task. The data shows medium-to-high validity that is, however, constrained by some characteristics of the movement and the user. We draw concrete recommendations to practitioners and discuss challenges to developing the method further.
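As a rough illustration of the validation step described above (Study II compares predicted muscle activations against surface-EMG recordings), the sketch below resamples two signals onto a shared time base, peak-normalizes them, and reports their Pearson correlation. It is an assumption-laden example, not the study's analysis code; the argument names and the choice of correlation as the validity measure are illustrative only.

# Minimal sketch (assumptions, not the study's analysis code): compare a
# simulated muscle activation trace with a surface-EMG envelope.
import numpy as np

def pearson_validity(sim_t, sim_activation, emg_t, emg_envelope, n_samples=500):
    """All inputs are 1D sequences; the time vectors are assumed increasing."""
    t0 = max(sim_t[0], emg_t[0])
    t1 = min(sim_t[-1], emg_t[-1])
    t = np.linspace(t0, t1, n_samples)       # shared time base over the overlap
    a = np.interp(t, sim_t, sim_activation)
    e = np.interp(t, emg_t, emg_envelope)
    a = a / np.max(np.abs(a))                # peak-normalize both signals
    e = e / np.max(np.abs(e))
    return float(np.corrcoef(a, e)[0, 1])    # Pearson r in [-1, 1]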
Export
BibTeX
@inproceedings{bachynskyi14a, TITLE = {Is Motion-capture-based Biomechanical Simulation Valid for {HCI} Studies? {Study} and Implications}, AUTHOR = {Bachynskyi, Myroslav and Oulasvirta, Antti and Palmas, Gregorio and Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-1-4503-2473-1}, URL = {http://doi.acm.org/10.1145/2556288.2557027}, DOI = {10.1145/2556288.2557027}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {Motion-capture-based biomechanical simulation is a non-invasive analysis method that yields a rich description of posture, joint, and muscle activity in human movement. The method is presently gaining ground in sports, medicine, and industrial ergonomics, but it also bears great potential for studies in HCI where the physical ergonomics of a design is important. To make the method more broadly accessible, we study its predictive validity for movements and users typical to studies in HCI. We discuss the sources of error in biomechanical simulation and present results from two validation studies conducted with a state-of-the-art system. Study I tested aimed movements ranging from multitouch gestures to dancing, finding out that the critical limiting factor is the size of movement. Study II compared muscle activation predictions to surface-EMG recordings in a 3D pointing task. The data shows medium-to-high validity that is, however, constrained by some characteristics of the movement and the user. We draw concrete recommendations to practitioners and discuss challenges to developing the method further.}, BOOKTITLE = {CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems}, PAGES = {3215--3224}, ADDRESS = {Toronto, Canada}, }
Endnote
%0 Conference Proceedings %A Bachynskyi, Myroslav %A Oulasvirta, Antti %A Palmas, Gregorio %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Is Motion-capture-based Biomechanical Simulation Valid for HCI Studies? Study and Implications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4D2D-8 %R 10.1145/2556288.2557027 %U http://doi.acm.org/10.1145/2556288.2557027 %D 2014 %B 32nd Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2014-04-26 - 2014-05-01 %C Toronto, Canada %X Motion-capture-based biomechanical simulation is a non-invasive analysis method that yields a rich description of posture, joint, and muscle activity in human movement. The method is presently gaining ground in sports, medicine, and industrial ergonomics, but it also bears great potential for studies in HCI where the physical ergonomics of a design is important. To make the method more broadly accessible, we study its predictive validity for movements and users typical to studies in HCI. We discuss the sources of error in biomechanical simulation and present results from two validation studies conducted with a state-of-the-art system. Study I tested aimed movements ranging from multitouch gestures to dancing, finding out that the critical limiting factor is the size of movement. Study II compared muscle activation predictions to surface-EMG recordings in a 3D pointing task. The data shows medium-to-high validity that is, however, constrained by some characteristics of the movement and the user. We draw concrete recommendations to practitioners and discuss challenges to developing the method further. %B CHI 2014 %P 3215 - 3224 %I ACM %@ 978-1-4503-2473-1
Bailly, G., Oulasvirta, A., Brumby, D.P., and Howes, A. 2014. Model of Visual Search and Selection Time in Linear Menus. CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{bailly2014model, TITLE = {Model of Visual Search and Selection Time in Linear Menus}, AUTHOR = {Bailly, Gilles and Oulasvirta, Antti and Brumby, Duncan P. and Howes, Andrew}, LANGUAGE = {eng}, ISBN = {978-1-4503-2473-1}, DOI = {10.1145/2556288.2557093}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems}, PAGES = {3865--3874}, ADDRESS = {Toronto, Canada}, }
Endnote
%0 Conference Proceedings %A Bailly, Gilles %A Oulasvirta, Antti %A Brumby, Duncan P. %A Howes, Andrew %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Model of Visual Search and Selection Time in Linear Menus : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-C43C-9 %R 10.1145/2556288.2557093 %D 2014 %B 32nd Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2014-04-26 - 2014-05-01 %C Toronto, Canada %B CHI 2014 %P 3865 - 3874 %I ACM %@ 978-1-4503-2473-1
Bergmann, S., Ritschel, T., and Dachsbacher, C. 2014. Interactive Appearance Editing in RGB-D Images. VMV 2014 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{BergmannVMV2014, TITLE = {Interactive Appearance Editing in {RGB-D} Images}, AUTHOR = {Bergmann, Stephan and Ritschel, Tobias and Dachsbacher, Carsten}, LANGUAGE = {eng}, DOI = {10.2312/vmv.20141269}, PUBLISHER = {Eurographics Association}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014-10}, BOOKTITLE = {VMV 2014 Vision, Modeling and Visualization}, DEBUG = {author: von Landesberger, Tatiana; author: Theisel, Holger; author: Urban, Philipp}, EDITOR = {Bender, Jan and Kuijper, Arjan}, PAGES = {1--8}, ADDRESS = {Darmstadt, Germany}, }
Endnote
%0 Conference Proceedings %A Bergmann, Stephan %A Ritschel, Tobias %A Dachsbacher, Carsten %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Interactive Appearance Editing in RGB-D Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-533B-C %R 10.2312/vmv.20141269 %D 2014 %B 19th International Workshop on Vision, Modeling and Visualization %Z date of event: 2014-10-08 - 2014-10-10 %C Darmstadt, Germany %B VMV 2014 Vision, Modeling and Visualization %E Bender, Jan; Kuijper, Arjan; von Landesberger, Tatiana; Theisel, Holger; Urban, Philipp %P 1 - 8 %I Eurographics Association
Bozkurt, N. 2014. Interacting with Five Fingernail Displays Using Hand Postures. Master's Thesis, Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@mastersthesis{BozkurtMastersThesis2014, TITLE = {Interacting with Five Fingernail Displays Using Hand Postures}, AUTHOR = {Bozkurt, Nisa}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, }
Endnote
%0 Thesis %A Bozkurt, Nisa %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Interacting with Five Fingernail Displays Using Hand Postures : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D88-A %I Universität des Saarlandes %C Saarbrücken %D 2014 %V master %9 master
Brunton, A., Wand, M., Wuhrer, S., Seidel, H.-P., and Weinkauf, T. 2014a. A Low-dimensional Representation for Robust Partial Isometric Correspondences Computation. Graphical Models 76, 2.
Abstract
Intrinsic shape matching has become the standard approach for pose invariant correspondence estimation among deformable shapes. Most existing approaches assume global consistency. While global isometric matching is well understood, only a few heuristic solutions are known for partial matching. Partial matching is particularly important for robustness to topological noise, which is a common problem in real-world scanner data. We introduce a new approach to partial isometric matching based on the observation that isometries are fully determined by local information: a map of a single point and its tangent space fixes an isometry. We develop a new representation for partial isometric maps based on equivalence classes of correspondences between pairs of points and their tangent-spaces. We apply our approach to register partial point clouds and compare it to the state-of-the-art methods, where we obtain significant improvements over global methods for real-world data and stronger guarantees than previous partial matching algorithms.
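The key observation above, that a map of a single point and its tangent space fixes an isometry, has a simple extrinsic analogue that may help intuition: for rigid motions of 3D data, one corresponding point plus a corresponding orthonormal local frame already pins down the whole transform. The sketch below illustrates only that analogue under assumed inputs (precomputed frames); it is not the paper's intrinsic, partial-matching algorithm.

# Minimal sketch (extrinsic analogue only, not the paper's method): a single
# point-plus-frame correspondence determines a rigid transform.
import numpy as np

def transform_from_frame_correspondence(p_src, F_src, p_tgt, F_tgt):
    """F_src, F_tgt: 3x3 matrices whose columns form an orthonormal local frame
    (two tangent directions and the normal) at p_src and p_tgt."""
    R = F_tgt @ F_src.T      # rotation taking the source frame to the target frame
    t = p_tgt - R @ p_src    # translation aligning the corresponding points
    return R, t

def apply_transform(R, t, points):
    """points: (n, 3) array; returns the rigidly transformed points."""
    return points @ R.T + t

In the paper this low-dimensional characterization is used intrinsically, over equivalence classes of point/tangent-space correspondences, which is what makes partial matching tractable.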
Export
BibTeX
@article{brunton13, TITLE = {A Low-dimensional Representation for Robust Partial Isometric Correspondences Computation}, AUTHOR = {Brunton, Alan and Wand, Michael and Wuhrer, Stefanie and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISSN = {1524-0703}, DOI = {10.1016/j.gmod.2013.11.003}, PUBLISHER = {Academic Press}, ADDRESS = {San Diego, CA}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {Intrinsic shape matching has become the standard approach for pose invariant correspondence estimation among deformable shapes. Most existing approaches assume global consistency. While global isometric matching is well understood, only a few heuristic solutions are known for partial matching. Partial matching is particularly important for robustness to topological noise, which is a common problem in real-world scanner data. We introduce a new approach to partial isometric matching based on the observation that isometries are fully determined by local information: a map of a single point and its tangent space fixes an isometry. We develop a new representation for partial isometric maps based on equivalence classes of correspondences between pairs of points and their tangent-spaces. We apply our approach to register partial point clouds and compare it to the state-of-the-art methods, where we obtain significant improvements over global methods for real-world data and stronger guarantees than previous partial matching algorithms.}, JOURNAL = {Graphical Models}, VOLUME = {76}, NUMBER = {2}, PAGES = {70--85}, }
Endnote
%0 Journal Article %A Brunton, Alan %A Wand, Michael %A Wuhrer, Stefanie %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Low-dimensional Representation for Robust Partial Isometric Correspondences Computation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-F6E9-5 %R 10.1016/j.gmod.2013.11.003 %7 2013-12-15 %D 2014 %X Intrinsic shape matching has become the standard approach for pose invariant correspondence estimation among deformable shapes. Most existing approaches assume global consistency. While global isometric matching is well understood, only a few heuristic solutions are known for partial matching. Partial matching is particularly important for robustness to topological noise, which is a common problem in real-world scanner data. We introduce a new approach to partial isometric matching based on the observation that isometries are fully determined by local information: a map of a single point and its tangent space fixes an isometry. We develop a new representation for partial isometric maps based on equivalence classes of correspondences between pairs of points and their tangent-spaces. We apply our approach to register partial point clouds and compare it to the state-of-the-art methods, where we obtain significant improvements over global methods for real-world data and stronger guarantees than previous partial matching algorithms. %J Graphical Models %V 76 %N 2 %& 70 %P 70 - 85 %I Academic Press %C San Diego, CA %@ false
Brunton, A., Salazar, A., Bolkart, T., and Wuhrer, S. 2014b. Review of Statistical Shape Spaces for 3D Data with Comparative Analysis for Human Faces. Computer Vision and Image Understanding 128.
Export
BibTeX
@article{BruntonSalazarBolkartWuhrer2014, TITLE = {Review of Statistical Shape Spaces for {3D} Data with Comparative Analysis for Human Faces}, AUTHOR = {Brunton, Alan and Salazar, Augusto and Bolkart, Timo and Wuhrer, Stefanie}, LANGUAGE = {eng}, ISSN = {1077-3142}, DOI = {10.1016/j.cviu.2014.05.005}, PUBLISHER = {Academic Press}, ADDRESS = {San Diego, CA}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Computer Vision and Image Understanding}, VOLUME = {128}, PAGES = {1--17}, }
Endnote
%0 Journal Article %A Brunton, Alan %A Salazar, Augusto %A Bolkart, Timo %A Wuhrer, Stefanie %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Review of Statistical Shape Spaces for 3D Data with Comparative Analysis for Human Faces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-6C77-8 %F ISI: 000341482400001 %R 10.1016/j.cviu.2014.05.005 %7 2014-05-27 %D 2014 %J Computer Vision and Image Understanding %V 128 %& 1 %P 1 - 17 %I Academic Press %C San Diego, CA %@ false
Dabala, L., Kellnhofer, P., Ritschel, T., et al. 2014. Manipulating Refractive and Reflective Binocular Disparity. Computer Graphics Forum (Proc. EUROGRAPHICS 2014) 33, 2.
Abstract
Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e. g., for glass, that both reflects and refracts, which may confuse the observer and result in poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates, that our approach combines comfortable viewing with realistic depiction of typical specular scenes.
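For orientation, the sketch below spells out the standard pinhole-stereo relation between depth and on-screen disparity, plus a simple global comfort check on the camera interaxial. It is a deliberate simplification, not the per-pixel optimization described above: the paper works with sets of possibly perceived disparities per pixel (through reflections and refractions), whereas this example assumes a single opaque depth per pixel and a parallel rig converged on a chosen plane; all function and parameter names are illustrative.

# Minimal sketch (simplified stand-in, not the paper's optimizer): disparity
# from depth for a parallel stereo rig converged at a given depth, and the
# largest interaxial that keeps every pixel within a comfort limit.
import numpy as np

def screen_disparity(depth, interaxial, focal_px, convergence_depth):
    """Disparity in pixels; zero at the convergence depth, positive behind it."""
    return focal_px * interaxial * (1.0 / convergence_depth - 1.0 / depth)

def max_comfortable_interaxial(depth_map, focal_px, convergence_depth, comfort_px):
    """Largest interaxial such that no pixel exceeds the comfort disparity."""
    per_unit = focal_px * np.abs(1.0 / convergence_depth - 1.0 / depth_map)
    worst = per_unit.max()
    return comfort_px / worst if worst > 0 else np.inf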
Export
BibTeX
@article{Kellnhofer2014b, TITLE = {Manipulating Refractive and Reflective Binocular Disparity}, AUTHOR = {Dabala, Lukasz and Kellnhofer, Petr and Ritschel, Tobias and Didyk, Piotr and Templin, Krzysztof and Rokita, Przemyslaw and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12290}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e. g., for glass, that both reflects and refracts, which may confuse the observer and result in poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates, that our approach combines comfortable viewing with realistic depiction of typical specular scenes.}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {33}, NUMBER = {2}, PAGES = {53--62}, BOOKTITLE = {EUROGRAPHICS 2014}, EDITOR = {L{\'e}vy, Bruno and Kautz, Jan}, }
Endnote
%0 Journal Article %A Dabala, Lukasz %A Kellnhofer, Petr %A Ritschel, Tobias %A Didyk, Piotr %A Templin, Krzysztof %A Rokita, Przemyslaw %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Manipulating Refractive and Reflective Binocular Disparity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-EEF9-6 %R 10.1111/cgf.12290 %7 2014-06-01 %D 2014 %X Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e. g., for glass, that both reflects and refracts, which may confuse the observer and result in poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates, that our approach combines comfortable viewing with realistic depiction of typical specular scenes. %J Computer Graphics Forum %V 33 %N 2 %& 53 %P 53 - 62 %I Wiley-Blackwell %C Oxford, UK %B EUROGRAPHICS 2014 %O The European Association for Computer Graphics 35th Annual Conference ; Strasbourg, France, April 7th &#8211; 11th, 2014 EUROGRAPHICS 2014 EG 2014
Elek, O., Bauszat, P., Ritschel, T., Magnor, M., and Seidel, H.-P. 2014a. Progressive Spectral Ray Differentials. VMV 2014 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{ElekVMV2014, TITLE = {Progressive Spectral Ray Differentials}, AUTHOR = {Elek, Oskar and Bauszat, Pablo and Ritschel, Tobias and Magnor, Marcus and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-3-905674-74-3}, PUBLISHER = {Eurographics Association}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {VMV 2014 Vision, Modeling and Visualization}, PAGES = {151--158}, ADDRESS = {Darmstadt, Germany}, }
Endnote
%0 Conference Proceedings %A Elek, Oskar %A Bauszat, Pablo %A Ritschel, Tobias %A Magnor, Marcus %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Progressive Spectral Ray Differentials : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5176-5 %D 2014 %B 19th International Workshop on Vision, Modeling and Visualization %Z date of event: 2014-10-08 - 2014-10-10 %C Darmstadt, Germany %B VMV 2014 Vision, Modeling and Visualization %P 151 - 158 %I Eurographics Association %@ 978-3-905674-74-3
Elek, O., Ritschel, T., Dachsbacher, C., and Seidel, H.-P. 2014b. Interactive Light Scattering with Principal-ordinate Propagation. Graphics Interface 2014, Canadian Information Processing Society.
Export
BibTeX
@inproceedings{ElekGI2014, TITLE = {Interactive Light Scattering with Principal-ordinate Propagation}, AUTHOR = {Elek, Oskar and Ritschel, Tobias and Dachsbacher, Carsten and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-4822-6003-8}, PUBLISHER = {Canadian Information Processing Society}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Graphics Interface 2014}, EDITOR = {Kry, Paul G. and Bunt, Andrea}, PAGES = {87--94}, ADDRESS = {Montreal, Canada}, }
Endnote
%0 Conference Proceedings %A Elek, Oskar %A Ritschel, Tobias %A Dachsbacher, Carsten %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Interactive Light Scattering with Principal-ordinate Propagation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5181-D %D 2014 %B Graphics Interface %Z date of event: 2014-05-07 - 2014-05-09 %C Montreal, Canada %B Graphics Interface 2014 %E Kry, Paul G.; Bunt, Andrea %P 87 - 94 %I Canadian Information Processing Society %@ 978-1-4822-6003-8 %U http://people.mpi-inf.mpg.de/~oelek/Papers/PrincipalOrdinatePropagation/
Elek, O., Ritschel, T., Dachsbacher, C., and Seidel, H.-P. 2014c. Principal-ordinates Propagation for Real-time Rendering of Participating Media. Computers & Graphics 45.
Export
BibTeX
@article{ElekCAG2014, TITLE = {Principal-ordinates Propagation for Real-time Rendering of Participating Media}, AUTHOR = {Elek, Oskar and Ritschel, Tobias and Dachsbacher, Carsten and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0097-8493}, DOI = {10.1016/j.cag.2014.08.003}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Computers \& Graphics}, VOLUME = {45}, PAGES = {28--39}, }
Endnote
%0 Journal Article %A Elek, Oskar %A Ritschel, Tobias %A Dachsbacher, Carsten %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Principal-ordinates Propagation for Real-time Rendering of Participating Media : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-516D-C %R 10.1016/j.cag.2014.08.003 %7 2014-09-06 %D 2014 %J Computers & Graphics %V 45 %& 28 %P 28 - 39 %I Elsevier %C Amsterdam %@ false
Elek, O., Bauszat, P., Ritschel, T., Magnor, M., and Seidel, H.-P. 2014d. Spectral Ray Differentials. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2014) 33, 4.
Export
BibTeX
@article{Elek2014EGSR, TITLE = {Spectral Ray Differentials}, AUTHOR = {Elek, Oskar and Bauszat, Pablo and Ritschel, Tobias and Magnor, Marcus and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12418}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {33}, NUMBER = {4}, PAGES = {113--122}, BOOKTITLE = {Eurographics Symposium on Rendering 2014}, EDITOR = {Jarosz, Wojciech and Peers, Pieter}, }
Endnote
%0 Journal Article %A Elek, Oskar %A Bauszat, Pablo %A Ritschel, Tobias %A Magnor, Marcus %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Spectral Ray Differentials : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4A77-B %R 10.1111/cgf.12418 %7 2014 %D 2014 %J Computer Graphics Forum %V 33 %N 4 %& 113 %P 113 - 122 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2014 %O Eurographics Symposium on Rendering 2014 EGSR 2014 Lyon, France, June 25th - 27th, 2014
Feit, A.M. and Oulasvirta, A. 2014. PianoText: Redesigning the Piano Keyboard for Text Entry. DIS’14, ACM SIGCHI Conference on Designing Interactive Systems, ACM.
Export
BibTeX
@inproceedings{feit2014pianotext, TITLE = {{PianoText}: {Redesigning} the Piano Keyboard for Text Entry}, AUTHOR = {Feit, Anna Maria and Oulasvirta, Antti}, LANGUAGE = {eng}, ISBN = {978-1-4503-2902-6}, DOI = {10.1145/2598510.2598547}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {DIS'14, ACM SIGCHI Conference on Designing Interactive Systems}, PAGES = {1045--1054}, ADDRESS = {Vancouver, Canada}, }
Endnote
%0 Conference Proceedings %A Feit, Anna Maria %A Oulasvirta, Antti %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T PianoText: Redesigning the Piano Keyboard for Text Entry : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-69FC-5 %R 10.1145/2598510.2598547 %D 2014 %B ACM SIGCHI Conference on Designing Interactive Systems %Z date of event: 2014-06-21 - 2014-06-25 %C Vancouver, Canada %B DIS'14 %P 1045 - 1054 %I ACM %@ 978-1-4503-2902-6
Garrido, P., Valgaerts, L., Rehmsen, O., Thormaehlen, T., Pérez, P., and Theobalt, C. 2014. Automatic Face Reenactment. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), IEEE Computer Society.
Export
BibTeX
@inproceedings{Garrido2014, TITLE = {Automatic Face Reenactment}, AUTHOR = {Garrido, Pablo and Valgaerts, Levi and Rehmsen, Ole and Thormaehlen, Thorsten and P{\'e}rez, Patrick and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4799-5117-8}, DOI = {10.1109/CVPR.2014.537}, PUBLISHER = {IEEE Computer Society}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014)}, PAGES = {4217--4224}, ADDRESS = {Columbus, OH, USA}, }
Endnote
%0 Conference Proceedings %A Garrido, Pablo %A Valgaerts, Levi %A Rehmsen, Ole %A Thormaehlen, Thorsten %A Per&#233;z, Patrick %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Automatic Face Reenactment : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5155-F %R 10.1109/CVPR.2014.537 %D 2014 %B 2014 IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2014-06-23 - 2014-06-28 %C Columbus, OH, USA %B 2014 IEEE Conference on Computer Vision and Pattern Recognition %P 4217 - 4224 %I IEEE Computer Society %@ 978-1-4799-5117-8
Gong, N.-W., Steimle, J., Olberding, S., et al. 2014. PrintSense: A Versatile Sensing Technique to Support Multimodal Flexible Surface Interaction. CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{Gong14, TITLE = {{PrintSense}: a versatile sensing technique to support multimodal flexible surface interaction}, AUTHOR = {Gong, Nan-Wei and Steimle, J{\"u}rgen and Olberding, Simon and Hodges, Steve and Gillian, Nicholas Edward and Kawahara, Yoshihiro and Paradiso, Joseph A.}, LANGUAGE = {eng}, ISBN = {978-1-4503-2473-1}, URL = {http://doi.acm.org/10.1145/2556288.2557239}, DOI = {10.1145/2556288.2557173}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems}, PAGES = {1407--1410}, ADDRESS = {Toronto, Canada}, }
Endnote
%0 Conference Proceedings %A Gong, Nan-Wei %A Steimle, J&#252;rgen %A Olberding, Simon %A Hodges, Steve %A Gilllian, Nicholas Edward %A Kawahara, Yoshihiro %A Paradiso, Joseph A. %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T PrintSense: A Versatile Sensing Technique to Support Multimodal Flexible Surface Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-BFAF-E %R 10.1145/2556288.2557173 %U http://doi.acm.org/10.1145/2556288.2557239 %D 2014 %B 32nd Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2014-04-26 - 2014-05-01 %C Toronto, Canada %B CHI 2014 %P 1407 - 1410 %I ACM %@ 978-1-4503-2473-1
Gryaditskaya, Y., Pouli, T., Reinhard, E., and Seidel, H.-P. 2014. Sky Based Light Metering for High Dynamic Range Images. Computer Graphics Forum (Proc. Pacific Graphics 2014) 33, 7.
Abstract
Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and, rather than relying on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel, effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.
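To make the calibration idea above concrete, here is a minimal sketch of anchoring linear HDR pixel values to an absolute luminance scale via a sky region and a predicted sky luminance. The function name, the Rec. 709 luminance weights, and the use of a median over the sky mask are illustrative assumptions, not the algorithm published in the paper.

```python
import numpy as np

def absolute_luminance(hdr_rgb, sky_mask, expected_sky_luminance_cd_m2):
    """Scale a linear HDR image to absolute luminance (cd/m^2).

    hdr_rgb:  (H, W, 3) array of linear-light HDR pixel values (relative units).
    sky_mask: (H, W) boolean mask of pixels identified as clear sky.
    expected_sky_luminance_cd_m2: luminance predicted for the masked sky
        region, e.g. from a sky model driven by the image's date/time/GPS metadata.
    """
    # Relative luminance from linear RGB (Rec. 709 weights) -- an assumption.
    y_rel = hdr_rgb @ np.array([0.2126, 0.7152, 0.0722])
    # One global scale factor maps the observed sky level to the predicted one.
    scale = expected_sky_luminance_cd_m2 / np.median(y_rel[sky_mask])
    return y_rel * scale
```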
Export
BibTeX
@article{CGF:Gryad:14, TITLE = {Sky Based Light Metering for High Dynamic Range Images}, AUTHOR = {Gryaditskaya, Yulia and Pouli, Tania and Reinhard, Erik and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {1467-8659}, DOI = {10.1111/cgf.12474}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and rather than rely on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel---effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {33}, NUMBER = {7}, PAGES = {61--69}, BOOKTITLE = {22nd Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2014)}, }
Endnote
%0 Journal Article %A Gryaditskaya, Yulia %A Pouli, Tania %A Reinhard, Erik %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Sky Based Light Metering for High Dynamic Range Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-6C64-1 %R 10.1111/cgf.12474 %7 2014 %D 2014 %X Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and rather than rely on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel&#8212;effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design. %J Computer Graphics Forum %V 33 %N 7 %& 61 %P 61 - 69 %I Wiley-Blackwell %C Oxford, UK %@ false %B 22nd Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2014 PG 2014 8 to 10 Oct 2014, Seoul, South Korea
Guenther, D., Reininghaus, J., Seidel, H.-P., and Weinkauf, T. 2014. Notes on the Simplification of the Morse-Smale Complex. Topological Methods in Data Analysis and Visualization III (TopoInVis 2013), Springer.
Abstract
The Morse-Smale complex can be either explicitly or implicitly represented. Depending on the type of representation, the simplification of the Morse-Smale complex works differently. In the explicit representation, the Morse-Smale complex is directly simplified by explicitly reconnecting the critical points during the simplification. In the implicit representation, on the other hand, the Morse-Smale complex is given by a combinatorial gradient field. In this setting, the simplification changes the combinatorial flow, which yields an indirect simplification of the Morse-Smale complex. The topological complexity of the Morse-Smale complex is reduced in both representations. However, the simplifications generally yield different results. In this paper, we emphasize the differences between these two representations, and provide a high-level discussion about their advantages and limitations.
Export
BibTeX
@inproceedings{guenther13a, TITLE = {Notes on the Simplification of the {Morse}-{Smale} Complex}, AUTHOR = {Guenther, David and Reininghaus, Jan and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-3-319-04098-1}, DOI = {10.1007/978-3-319-04099-8_9}, PUBLISHER = {Springer}, YEAR = {2013}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {The Morse-Smale complex can be either explicitly or implicitly represented. Depending on the type of representation, the simplification of the Morse-Smale complex works differently. In the explicit representation, the Morse-Smale complex is directly simplified by explicitly reconnecting the critical points during the simplification. In the implicit representation, on the other hand, the Morse-Smale complex is given by a combinatorial gradient field. In this setting, the simplification changes the combinatorial flow, which yields an indirect simplification of the Morse-Smale complex. The topological complexity of the Morse-Smale complex is reduced in both representations. However, the simplifications generally yield different results. In this paper, we emphasize the differences between these two representations, and provide a high-level discussion about their advantages and limitations.}, BOOKTITLE = {Topological Methods in Data Analysis and Visualization III (TopoInVis 2013)}, EDITOR = {Bremer, Peer-Timo and Hotz, Ingrid and Pascucci, Valerio and Peikert, Ronald}, PAGES = {135--150}, SERIES = {Mathematics and Visualization}, ADDRESS = {Davis, CA, USA}, }
Endnote
%0 Conference Proceedings %A Guenther, David %A Reininghaus, Jan %A Seidel, Hans-Peter %A Weinkauf, Tino %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Notes on the Simplification of the Morse-Smale Complex : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-52F3-3 %R 10.1007/978-3-319-04099-8_9 %D 2014 %B TopoInVis %Z date of event: 2013-03-04 - 2013-03-06 %C Davis, CA, USA %X The Morse-Smale complex can be either explicitly or implicitly represented. Depending on the type of representation, the simplification of the Morse-Smale complex works differently. In the explicit representation, the Morse-Smale complex is directly simplified by explicitly reconnecting the critical points during the simplification. In the implicit representation, on the other hand, the Morse-Smale complex is given by a combinatorial gradient field. In this setting, the simplification changes the combinatorial flow, which yields an indirect simplification of the Morse-Smale complex. The topological complexity of the Morse-Smale complex is reduced in both representations. However, the simplifications generally yield different results. In this paper, we emphasize the differences between these two representations, and provide a high-level discussion about their advantages and limitations. %B Topological Methods in Data Analysis and Visualization III %E Bremer, Peer-Timo; Hotz, Ingrid; Pascucci, Valerio; Peikert, Ronald %P 135 - 150 %I Springer %@ 978-3-319-04098-1 %B Mathematics and Visualization
Günther, D., Jacobson, A., Reininghaus, J., Seidel, H.-P., Sorkine-Hornung, O., and Weinkauf, T. 2014a. Fast and Memory-efficient Topological Denoising of 2D and 3D Scalar Fields. IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS 2014) 20, 12.
Export
BibTeX
@article{guenther14c, TITLE = {Fast and Memory-efficient Topological Denoising of {2D} and {3D} Scalar Fields}, AUTHOR = {G{\"u}nther, David and Jacobson, Alec and Reininghaus, Jan and Seidel, Hans-Peter and Sorkine-Hornung, Olga and Weinkauf, Tino}, LANGUAGE = {eng}, ISSN = {1077-2626}, DOI = {10.1109/TVCG.2014.2346432}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014-12}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS)}, VOLUME = {20}, NUMBER = {12}, PAGES = {2585--2594}, BOOKTITLE = {IEEE Visual Analytics Science \& Technology Conference, IEEE Information Visualization Conference, and IEEE Scientific Visualization Conference Proceedings 2014}, DEBUG = {author: Ebert, David; author: Hauser, Helwig; author: Heer, Jeffrey; author: North, Chris; author: Tory, Melanie; author: Qu, Huamin; author: Shen, Han-Wei; author: Ynnerman, Anders}, EDITOR = {Chen, Min}, }
Endnote
%0 Journal Article %A G&#252;nther, David %A Jacobson, Alec %A Reininghaus, Jan %A Seidel, Hans-Peter %A Sorkine-Hornung, Olga %A Weinkauf, Tino %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Fast and Memory-efficient Topological Denoising of 2D and 3D Scalar Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5349-E %R 10.1109/TVCG.2014.2346432 %7 2014 %D 2014 %J IEEE Transactions on Visualization and Computer Graphics %V 20 %N 12 %& 2585 %P 2585 - 2594 %I IEEE Computer Society %C Los Alamitos, CA %@ false %B IEEE Visual Analytics Science & Technology Conference, IEEE Information Visualization Conference, and IEEE Scientific Visualization Conference Proceedings 2014 %O Proceedings 2014 ; Paris, France, 9&#8211;14 November 2014 IEEE VIS 2014
Günther, J. 2014. Ray Tracing of Dynamic Scenes. urn:nbn:de:bsz:291-scidok-59295.
Export
BibTeX
@phdthesis{GuentherPhD2014, TITLE = {Ray Tracing of Dynamic Scenes}, AUTHOR = {G{\"u}nther, Johannes}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59295}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, }
Endnote
%0 Thesis %A G&#252;nther, Johannes %Y Slusallek, Philipp %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Ray Tracing of Dynamic Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-54C0-5 %U urn:nbn:de:bsz:291-scidok-59295 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %P 82 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5929/
Günther, T., Schulze, M., Esturo, J.M., Rössl, C., and Theisel, H. 2014b. Opacity Optimization for Surfaces. Computer Graphics Forum (Proc. EuroVis 2014) 33, 3.
Export
BibTeX
@article{CGF:CGF12357, TITLE = {Opacity Optimization for Surfaces}, AUTHOR = {G{\"u}nther, Tobias and Schulze, Maik and Esturo, Janick Martinez and R{\"o}ssl, Christian and Theisel, Holger}, LANGUAGE = {eng}, ISSN = {1467-8659}, DOI = {10.1111/cgf.12357}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. EuroVis)}, VOLUME = {33}, NUMBER = {3}, PAGES = {11--20}, BOOKTITLE = {Eurographics Conference on Visualization 2014 (EuroVis 2014)}, EDITOR = {Carr, Hamish and Rheingans, Penny and Schumann, Heidrun}, }
Endnote
%0 Journal Article %A G&#252;nther, Tobias %A Schulze, Maik %A Esturo, Janick Martinez %A R&#246;ssl, Christian %A Theisel, Holger %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Opacity Optimization for Surfaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EB80-6 %R 10.1111/cgf.12357 %7 2014-07-12 %D 2014 %K Categories and Subject Descriptors (according to ACM CCS), I.3.3 [Computer Graphics]: Three&#8208;Dimensional Graphics and Realism&#8212;Display Algorithms %J Computer Graphics Forum %V 33 %N 3 %& 11 %P 11 - 20 %I Wiley-Blackwell %C Oxford, UK %@ false %B Eurographics Conference on Visualization 2014 %O EuroVis 2014 Swansea, Wales, UK, June 9 - 13, 2014
Horváth, G., Blahó, M., Egri, A., Hegedüs, R., and Szél, G. 2014a. Circular Polarization Vision of Scarab Beetles. In: Polarized Light and Polarization Vision in Animal Sciences. Springer, New York, NY.
Export
BibTeX
@incollection{2014:AnimalSciences:Hegedues6, TITLE = {Circular Polarization Vision of Scarab Beetles}, AUTHOR = {Horv{\'a}th, G{\'a}bor and Blah{\'o}, M. and Egri, A. and Heged{\"u}s, Ramon and Sz{\'e}l, Gy}, LANGUAGE = {eng}, ISBN = {978-3-642-54717-1; 978-3-642-54718-8}, DOI = {10.1007/978-3-642-54718-8_6}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, EDITION = {2. ed.}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Polarized Light and Polarization Vision in Animal Sciences}, EDITOR = {Horv{\'a}th, G{\'a}bor}, PAGES = {147--170}, SERIES = {Springer Series in Vision Research}, VOLUME = {2}, }
Endnote
%0 Book Section %A Horv&#225;th, G&#225;bor %A Blah&#243;, M. %A Egri, A. %A Heged&#252;s, Ramon %A Sz&#233;l, Gy %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Circular Polarization Vision of Scarab Beetles : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-839D-7 %R 10.1007/978-3-642-54718-8_6 %D 2014 %B Polarized Light and Polarization Vision in Animal Sciences %E Horv&#225;th, G&#225;bor %P 147 - 170 %I Springer %C New York, NY %@ 978-3-642-54717-1 978-3-642-54718-8 %S Springer Series in Vision Research %N 2
Horváth, G., Barta, A., and Hegedüs, R. 2014b. Polarization of the Sky. In: Polarized Light and Polarization Vision in Animal Sciences. Springer, New York, NY.
Export
BibTeX
@incollection{HorvathPolarizationSky2014, TITLE = {Polarization of the Sky}, AUTHOR = {Horv{\'a}th, G{\'a}bor and Barta, Andr{\'a}s and Heged{\"u}s, Ramon}, LANGUAGE = {eng}, ISBN = {978-3-642-54717-1; 978-3-642-54718-8}, DOI = {10.1007/978-3-642-54718-8_18}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, EDITION = {2. ed.}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Polarized Light and Polarization Vision in Animal Sciences}, EDITOR = {Horv{\'a}th, G{\'a}bor}, PAGES = {367--406}, SERIES = {Springer Series in Vision Research}, VOLUME = {2}, }
Endnote
%0 Book Section %A Horv&#225;th, G&#225;bor %A Barta, Andr&#225;s %A Heged&#252;s, Ramon %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Polarization of the Sky : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-22D0-C %R 10.1007/978-3-642-54718-8_18 %D 2014 %B Polarized Light and Polarization Vision in Animal Sciences %E Horv&#225;th, G&#225;bor %P 367 - 406 %I Springer %C New York, NY %@ 978-3-642-54717-1 978-3-642-54718-8 %S Springer Series in Vision Research %N 2
Horváth, G. and Hegedüs, R. 2014a. Polarization Characteristics of Forest Canopies with Biological Implications. In: Polarized Light and Polarization Vision in Animal Sciences. Springer, New York, NY.
Export
BibTeX
@incollection{HorvathPolarization2014, TITLE = {Polarization Characteristics of Forest Canopies with Biological Implications}, AUTHOR = {Horv{\'a}th, G{\'a}bor and Heged{\"u}s, Ramon}, LANGUAGE = {eng}, ISBN = {978-3-642-54717-1; 978-3-642-54718-8}, DOI = {10.1007/978-3-642-54718-8_17}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, EDITION = {2. ed.}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Polarized Light and Polarization Vision in Animal Sciences}, EDITOR = {Horv{\'a}th, G{\'a}bor}, PAGES = {345--365}, SERIES = {Springer Series in Vision Research}, VOLUME = {2}, }
Endnote
%0 Book Section %A Horv&#225;th, G&#225;bor %A Heged&#252;s, Ramon %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Polarization Characteristics of Forest Canopies with Biological Implications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-22CE-3 %R 10.1007/978-3-642-54718-8_17 %D 2014 %B Polarized Light and Polarization Vision in Animal Sciences %E Horv&#225;th, G&#225;bor %P 345 - 365 %I Springer %C New York, NY %@ 978-3-642-54717-1 978-3-642-54718-8 %S Springer Series in Vision Research %N 2
Horváth, G. and Hegedüs, R. 2014b. Polarization-Induced False Colours. In: Polarized Light and Polarization Vision in Animal Sciences. Springer, New York, NY.
Export
BibTeX
@incollection{HorvathColours2014, TITLE = {Polarization-Induced False Colours}, AUTHOR = {Horv{\'a}th, G{\'a}bor and Heged{\"u}s, Ramon}, LANGUAGE = {eng}, ISBN = {978-3-642-54717-1; 978-3-642-54718-8}, DOI = {10.1007/978-3-642-54718-8_13}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, EDITION = {2. ed.}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Polarized Light and Polarization Vision in Animal Sciences}, EDITOR = {Horv{\'a}th, G{\'a}bor}, PAGES = {293--302}, SERIES = {Springer Series in Vision Research}, VOLUME = {2}, }
Endnote
%0 Book Section %A Horv&#225;th, G&#225;bor %A Heged&#252;s, Ramon %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Polarization-Induced False Colours : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-22CC-7 %R 10.1007/978-3-642-54718-8_13 %D 2014 %B Polarized Light and Polarization Vision in Animal Sciences %E Horv&#225;th, G&#225;bor %P 293 - 302 %I Springer %C New York, NY %@ 978-3-642-54717-1 978-3-642-54718-8 %S Springer Series in Vision Research %N 2
Ihrke, I. 2014. Opacity. In: Computer Vision. Springer, Berlin.
Export
BibTeX
@incollection{Ihrke2011, TITLE = {Opacity}, AUTHOR = {Ihrke, Ivo}, LANGUAGE = {eng}, ISBN = {978-0-387-30771-8}, DOI = {10.1007/978-0-387-31439-6_564}, PUBLISHER = {Springer}, ADDRESS = {Berlin}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Computer Vision}, PAGES = {562--564}, }
Endnote
%0 Book Section %A Ihrke, Ivo %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Opacity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-2556-A %R 10.1007/978-0-387-31439-6_564 %D 2014 %B Computer Vision %P 562 - 564 %I Springer %C Berlin %@ 978-0-387-30771-8
Jain, A. 2014. Data-driven Methods for Interactive Visual Content Creation and Manipulation. urn:nbn:de:bsz:291-scidok-58210.
Export
BibTeX
@phdthesis{PhDThesis:JainArjun, TITLE = {Data-driven Methods for Interactive Visual Content Creation and Manipulation}, AUTHOR = {Jain, Arjun}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58210}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, }
Endnote
%0 Thesis %A Jain, Arjun %Y Thorm&#228;hlen, Thorsten %A referee: Schiele, Bernt %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Data-driven Methods for Interactive Visual Content Creation and Manipulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EB82-2 %U urn:nbn:de:bsz:291-scidok-58210 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %P XV, 82 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5821/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Karrenbauer, A. and Oulasvirta, A. 2014. Improvements to Keyboard Optimization with Integer Programming. UIST’14, 27th Annual ACM Symposium on User Interface Software and Technology, ACM.
Export
BibTeX
@inproceedings{KO2014, TITLE = {Improvements to Keyboard Optimization with Integer Programming}, AUTHOR = {Karrenbauer, Andreas and Oulasvirta, Antti}, LANGUAGE = {eng}, ISBN = {978-1-4503-3069-5}, DOI = {10.1145/2642918.2647382}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {UIST'14, 27th Annual ACM Symposium on User Interface Software and Technology}, EDITOR = {Benko, Hrvoje and Dontcheva, Mira and Wigdor, Daniel}, PAGES = {621--626}, ADDRESS = {Honolulu, HI, USA}, }
Endnote
%0 Conference Proceedings %A Karrenbauer, Andreas %A Oulasvirta, Antti %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Improvements to Keyboard Optimization with Integer Programming : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-43F4-B %R 10.1145/2642918.2647382 %D 2014 %B 27th Annual ACM Symposium on User Interface Software and Technology %Z date of event: 2014-10-05 - 2014-10-08 %C Honolulu, HI, USA %B UIST'14 %E Benko, Hrvoje; Dontcheva, Mira; Wigdor, Daniel %P 621 - 626 %I ACM %@ 978-1-4503-3069-5
Kawahara, Y., Hodges, S., Olberding, S., Steimle, J., and Gong, N.-W. 2014. Building Functional Prototypes Using Conductive Inkjet Printing. IEEE Pervasive Computing 13, 3.
Export
BibTeX
@article{6850258, TITLE = {Building Functional Prototypes Using Conductive Inkjet Printing}, AUTHOR = {Kawahara, Yoshihiro and Hodges, Steve and Olberding, Simon and Steimle, J{\"u}rgen and Gong, Nan-Wei}, LANGUAGE = {eng}, ISSN = {1536-1268}, DOI = {10.1109/MPRV.2014.41}, PUBLISHER = {IEEE}, ADDRESS = {Piscataway, NJ}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {IEEE Pervasive Computing}, VOLUME = {13}, NUMBER = {3}, PAGES = {30--38}, }
Endnote
%0 Journal Article %A Kawahara, Yoshihiro %A Hodges, Steve %A Olberding, Simon %A Steimle, J&#252;rgen %A Gong, Nan-Wei %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Building Functional Prototypes Using Conductive Inkjet Printing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-BFA6-F %R 10.1109/MPRV.2014.41 %7 2014 %D 2014 %K flexible electronics;ink jet printing;printed circuit manufacture;3D printers;conductive circuits;conductive inkjet printing process;consumer-grade inkjet printer;custom-made subcircuits;electronic circuits;fabrication techniques;flexible substrate;functional device prototypes;off-the-shelf electronic components;pervasive computing;printed conductive patterns;prototyping mechanical structures;proximity-sensitive surfaces;single wiring layer;touch-sensitive surfaces;Capacitive sensors;Digital systems;Electronic equipment;Fabrication;Ink jet printing;Printers;Resistance;Substrates;Virtual manufacturing;capacitive sensors;conductive ink;digital fabrication;inkjet printing;pervasive computing;rapid prototyping;touch sensing %J IEEE Pervasive Computing %V 13 %N 3 %& 30 %P 30 - 38 %I IEEE %C Piscataway, NJ %@ false
Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2014a. Improving Perception of Binocular Stereo Motion on 3D Display Devices. Stereoscopic Displays and Applications XXV, SPIE.
Abstract
This paper studies the presentation of moving stereo images on different display devices. We address three representative issues. First, we propose temporal compensation for the Pulfrich effect found when using anaglyph glasses. Second, we describe how content-adaptive capture protocols can reduce false motion-in-depth sensation for time-multiplexing based displays. Third, we conclude with a recommendation on how to improve the rendering of synthetic stereo animations.
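As a rough illustration of the first issue only, the sketch below counteracts a Pulfrich-style perceptual delay by presenting the darker (more strongly filtered) eye a slightly later frame. The constant delay, the whole-frame advance, and the function name are hypothetical simplifications and not the compensation scheme proposed in the paper.

```python
def compensated_frame_indices(frame_idx, fps, darker_eye_delay_ms):
    """Choose per-eye frame indices so that presentation counteracts the
    perceptual delay of the darker (filtered) eye in anaglyph viewing.

    Assumes a constant, known perceptual delay and that the darker eye's
    stream can simply be advanced by whole frames -- both are illustrative
    simplifications.
    """
    advance = round(darker_eye_delay_ms * fps / 1000.0)
    bright_eye_frame = frame_idx
    # Show the darker eye a slightly later frame so both eyes are perceived
    # as depicting the same moment in scene time.
    darker_eye_frame = frame_idx + advance
    return bright_eye_frame, darker_eye_frame
```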
Export
BibTeX
@inproceedings{Kellnhofer2014a, TITLE = {Improving Perception of Binocular Stereo Motion on {3D} Display Devices}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0277-786X}, ISBN = {9780819499288}, DOI = {10.1117/12.2032389}, PUBLISHER = {SPIE}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {This paper studies the presentation of moving stereo images on different display devices. We address three representative issues. First, we propose temporal compensation for the Pulfrich effect found when using anaglyph glasses. Second, we describe, how content-adaptive capture protocols can reduce false motion-in-depth sensation for time-multiplexing based displays. Third, we conclude with a recommendation how to improve rendering of synthetic stereo animations.}, BOOKTITLE = {Stereoscopic Displays and Applications XXV}, EDITOR = {Woods, Andrew J. and Holliman, Nicolas S. and Favalora, Gregg E.}, PAGES = {1--11}, EID = {901116}, SERIES = {Proceedings of SPIE-IS\&T Electronic Imaging}, VOLUME = {9011}, ADDRESS = {San Francisco, CA, USA}, }
Endnote
%0 Conference Proceedings %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Improving Perception of Binocular Stereo Motion on 3D Display Devices : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-318D-7 %R 10.1117/12.2032389 %D 2014 %B Stereoscopic Displays and Applications XXV %Z date of event: 2014-02-03 - 2014-02-05 %C San Francisco, CA, USA %X This paper studies the presentation of moving stereo images on different display devices. We address three representative issues. First, we propose temporal compensation for the Pulfrich effect found when using anaglyph glasses. Second, we describe, how content-adaptive capture protocols can reduce false motion-in-depth sensation for time-multiplexing based displays. Third, we conclude with a recommendation how to improve rendering of synthetic stereo animations. %B Stereoscopic Displays and Applications XXV %E Woods, Andrew J.; Holliman, Nicolas S.; Favalora, Gregg E. %P 1 - 11 %Z sequence number: 901116 %I SPIE %@ 9780819499288 %B Proceedings of SPIE-IS&T Electronic Imaging %N 9011 %@ false
Kellnhofer, P., Ritschel, T., Vangorp, P., Myszkowski, K., and Seidel, H.-P. 2014b. Stereo Day-for-Night: Retargeting Disparity for Scotopic Vision. ACM Transactions on Applied Perception 11, 3.
Export
BibTeX
@article{kellnhofer:2014c:DarkStereo, TITLE = {Stereo Day-for-Night: Retargeting Disparity for Scotopic Vision}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Vangorp, Peter and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {1544-3558}, DOI = {10.1145/2644813}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {ACM Transactions on Applied Perception}, VOLUME = {11}, NUMBER = {3}, EID = {15}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Ritschel, Tobias %A Vangorp, Peter %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Stereo Day-for-Night: Retargeting Disparity for Scotopic Vision : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EE0E-E %R 10.1145/2644813 %7 2014 %D 2014 %J ACM Transactions on Applied Perception %V 11 %N 3 %Z sequence number: 15 %I ACM %C New York, NY %@ false
Khattab, D., Theobalt, C., Hussein, A.S., and Tolba, M.F. 2014. Modified GrabCut for Human Face Segmentation. Ain Shams Engineering Journal 5, 4.
Abstract
GrabCut is a segmentation technique for 2D still color images, which is mainly based on an iterative energy minimization. The energy function of the GrabCut optimization algorithm is based mainly on a probabilistic model for pixel color distribution. Therefore, GrabCut may produce unacceptable results in cases of low contrast between foreground and background colors. To address this, this paper presents a modified GrabCut technique for the segmentation of human faces from images of full humans. The modified technique introduces a new face location model for the energy minimization function of the GrabCut, in addition to the existing color one. This location model considers the distance distribution of the pixels from the silhouette boundary of a head from a 3D morphable model fitted to the image. The experimental results of the modified GrabCut have demonstrated better segmentation robustness and accuracy compared to the original GrabCut for human face segmentation.
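The following sketch illustrates, under stated assumptions, how a distance-based location term could be combined with a color term in a per-pixel GrabCut-style data term. The quadratic fall-off, `sigma`, `w`, and all names are hypothetical and not the published formulation.

```python
import numpy as np

def face_unary_energy(color_neg_log_lik, dist_to_head_silhouette, sigma=20.0, w=1.0):
    """Per-pixel foreground data term combining a color model with a location
    prior, in the spirit of adding a second (location) model to the GrabCut
    energy.

    color_neg_log_lik:       (H, W) array, -log p(color | foreground GMM)
    dist_to_head_silhouette: (H, W) array, pixel distance to the projected
                             silhouette of a head model fitted to the image
    """
    # Pixels far from the fitted head silhouette pay a higher foreground cost.
    location_term = (dist_to_head_silhouette / sigma) ** 2
    return color_neg_log_lik + w * location_term
```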
Export
BibTeX
@article{Khattab20141083, TITLE = {Modified {GrabCut} for Human Face Segmentation}, AUTHOR = {Khattab, Dina and Theobalt, Christian and Hussein, Ashraf S. and Tolba, Mohamed F.}, LANGUAGE = {eng}, ISSN = {2090-4479}, DOI = {10.1016/j.asej.2014.04.012}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {Abstract GrabCut is a segmentation technique for 2D still color images, which is mainly based on an iterative energy minimization. The energy function of the GrabCut optimization algorithm is based mainly on a probabilistic model for pixel color distribution. Therefore, GrabCut may introduce unacceptable results in the cases of low contrast between foreground and background colors. In this manner, this paper presents a modified GrabCut technique for the segmentation of human faces from images of full humans. The modified technique introduces a new face location model for the energy minimization function of the GrabCut, in addition to the existing color one. This location model considers the distance distribution of the pixels from the silhouette boundary of a fitted head, of a 3D morphable model, to the image. The experimental results of the modified GrabCut have demonstrated better segmentation robustness and accuracy compared to the original GrabCut for human face segmentation.}, JOURNAL = {Ain Shams Engineering Journal}, VOLUME = {5}, NUMBER = {4}, PAGES = {1083--1091}, }
Endnote
%0 Journal Article %A Khattab, Dina %A Theobalt, Christian %A Hussein, Ashraf S. %A Tolba, Mohamed F. %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Modified GrabCut for Human Face Segmentation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF83-F %R 10.1016/j.asej.2014.04.012 %7 2014 %D 2014 %X Abstract GrabCut is a segmentation technique for 2D still color images, which is mainly based on an iterative energy minimization. The energy function of the GrabCut optimization algorithm is based mainly on a probabilistic model for pixel color distribution. Therefore, GrabCut may introduce unacceptable results in the cases of low contrast between foreground and background colors. In this manner, this paper presents a modified GrabCut technique for the segmentation of human faces from images of full humans. The modified technique introduces a new face location model for the energy minimization function of the GrabCut, in addition to the existing color one. This location model considers the distance distribution of the pixels from the silhouette boundary of a fitted head, of a 3D morphable model, to the image. The experimental results of the modified GrabCut have demonstrated better segmentation robustness and accuracy compared to the original GrabCut for human face segmentation. %K Image segmentation %J Ain Shams Engineering Journal %V 5 %N 4 %& 1083 %P 1083 - 1091 %I Elsevier %C Amsterdam %@ false %U http://www.sciencedirect.com/science/article/pii/S2090447914000562
Kim, K.I., Tompkin, J., and Theobalt, C. 2014. Local High-order Regularization on Data Manifolds. Max-Planck-Institut für Informatik, Saarbrücken.
Export
BibTeX
@techreport{KimTR2014, TITLE = {Local High-order Regularization on Data Manifolds}, AUTHOR = {Kim, Kwang In and Tompkin, James and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2014-4-001}, INSTITUTION = {Max-Planck Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, TYPE = {Research Report}, }
Endnote
%0 Report %A Kim, Kwang In %A Tompkin, James %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Local High-order Regularization on Data Manifolds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-B210-7 %Y Max-Planck Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2014 %P 12 p. %B Research Report %@ false
Klehm, O., Ihrke, I., Seidel, H.-P., and Eisemann, E. 2014a. Property and Lighting Manipulations for Static Volume Stylization Using a Painting Metaphor. IEEE Transactions on Visualization and Computer Graphics 20, 7.
Export
BibTeX
@article{PLM-tvcg_Klehm2014, TITLE = {Property and Lighting Manipulations for Static Volume Stylization Using a Painting Metaphor}, AUTHOR = {Klehm, Oliver and Ihrke, Ivo and Seidel, Hans-Peter and Eisemann, Elmar}, LANGUAGE = {eng}, ISSN = {1077-2626}, DOI = {10.1109/TVCG.2014.13}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014-07}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics}, VOLUME = {20}, NUMBER = {7}, PAGES = {983--995}, }
Endnote
%0 Journal Article %A Klehm, Oliver %A Ihrke, Ivo %A Seidel, Hans-Peter %A Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Property and Lighting Manipulations for Static Volume Stylization Using a Painting Metaphor : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-51CA-B %R 10.1109/TVCG.2014.13 %7 2014 %D 2014 %K rendering (computer graphics);artistic control;environmental lighting;image component;lighting manipulations;noise function parameters;painting metaphor;property manipulations;realistic rendering;static volume stylization;static volumes;tomographic reconstruction;volume appearance;volume properties;volumetric rendering equation;Equations;Image reconstruction;Lighting;Mathematical model;Optimization;Rendering (computer graphics);Scattering;Artist control;optimization;participating media %J IEEE Transactions on Visualization and Computer Graphics %V 20 %N 7 %& 983 %P 983 - 995 %I IEEE Computer Society %C Los Alamitos, CA %@ false
Klehm, O., Seidel, H.-P., and Eisemann, E. 2014b. Filter-based Real-time Single Scattering using Rectified Shadow Maps. Journal of Computer Graphics Techniques 3, 3.
Export
BibTeX
@article{fbss_jcgtKlehm2014, TITLE = {Filter-based Real-time Single Scattering using Rectified Shadow Maps}, AUTHOR = {Klehm, Oliver and Seidel, Hans-Peter and Eisemann, Elmar}, LANGUAGE = {eng}, ISSN = {2331-7418}, URL = {http://jcgt.org/published/0003/03/02/}, PUBLISHER = {Williams College}, ADDRESS = {Williamstown, MA}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014-08}, JOURNAL = {Journal of Computer Graphics Techniques}, VOLUME = {3}, NUMBER = {3}, PAGES = {7--34}, }
Endnote
%0 Journal Article %A Klehm, Oliver %A Seidel, Hans-Peter %A Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Filter-based Real-time Single Scattering using Rectified Shadow Maps : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-51B3-E %U http://jcgt.org/published/0003/03/02/ %7 2014 %D 2014 %J Journal of Computer Graphics Techniques %O JCGT %V 3 %N 3 %& 7 %P 7 - 34 %I Williams College %C Williamstown, MA %@ false %U http://jcgt.org/published/0003/03/02/
Klehm, O., Seidel, H.-P., and Eisemann, E. 2014c. Prefiltered Single Scattering. Proceedings I3D 2014, ACM.
Export
BibTeX
@inproceedings{Klehm:2014:PSS:2556700.2556704, TITLE = {Prefiltered Single Scattering}, AUTHOR = {Klehm, Oliver and Seidel, Hans-Peter and Eisemann, Elmar}, LANGUAGE = {eng}, ISBN = {978-1-4503-2717-6}, DOI = {10.1145/2556700.2556704}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Proceedings I3D 2014}, EDITOR = {Keyser, John and Sander, Pedro}, PAGES = {71--78}, ADDRESS = {San Francisco, CA, USA}, }
Endnote
%0 Conference Proceedings %A Klehm, Oliver %A Seidel, Hans-Peter %A Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Prefiltered Single Scattering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-51C5-6 %R 10.1145/2556700.2556704 %D 2014 %B 18th Meeting of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games %Z date of event: 2014-03-14 - 2014-03-16 %C San Francisco, CA, USA %K participating media, scattering, shadow test %B Proceedings I3D 2014 %E Keyser, John; Sander, Pedro %P 71 - 78 %I ACM %@ 978-1-4503-2717-6
Konz, V. and Schuricht, F. 2014. Contact with a Corner for Nonlinearly Elastic Rods. Journal of Elasticity 117, 1.
Export
BibTeX
@article{KonzSchuricht2014, TITLE = {Contact with a Corner for Nonlinearly Elastic Rods}, AUTHOR = {Konz, Verena and Schuricht, Friedemann}, LANGUAGE = {eng}, ISSN = {0374-3535}, DOI = {10.1007/s10659-013-9462-1}, PUBLISHER = {Springer}, ADDRESS = {Dordrecht}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Journal of Elasticity}, VOLUME = {117}, NUMBER = {1}, PAGES = {1--20}, }
Endnote
%0 Journal Article %A Konz, Verena %A Schuricht, Friedemann %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Contact with a Corner for Nonlinearly Elastic Rods : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-6C88-2 %F ISI: 000341864200001 %R 10.1007/s10659-013-9462-1 %7 2013 %D 2014 %J Journal of Elasticity %V 117 %N 1 %& 1 %P 1 - 20 %I Springer %C Dordrecht %@ false
Kozlov, Y. 2014. Analysis of Energy Regularization for Harmonic Surface Deformation. Universität des Saarlandes, Saarbrücken.
Abstract
Recently it has been shown that regularization can be beneficial for a variety of geometry processing methods on discretized domains. Linear energy regularization, proposed by Martinez Esturo et al. [MRT14], creates a global, linear regularization term which is strongly coupled with the deformation energy. It can be computed interactively, with little impact on runtime. This work analyzes the effects of linear energy regularization on harmonic surface deformation, proposed by Zayer et al. [ZRKS05]. Harmonic surface deformation is a variational technique for gradient domain surface manipulation. This work demonstrates that linear energy regularization can overcome some of the inherent limitations of this technique and effectively reduce its common artifacts, eliminating the need for costly non-linear regularization and expanding the modeling capabilities of harmonic surface deformation.
Export
BibTeX
@mastersthesis{Kozlov2014, TITLE = {Analysis of Energy Regularization for Harmonic Surface Deformation}, AUTHOR = {Kozlov, Yeara}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {Recently it has been shown that regularization can be beneficial for a variety of geometry processing methods on discretized domains. Linear energy regularization, proposed by Martinez Esturo et al. [MRT14], creates a global, linear regularization term which is strongly coupled with the deformation energy. It can be computed interactively, with little impact on runtime. This work analyzes the effects of linear energy regularization on harmonic surface deformation, proposed by Zayer et al. [ZRKS05]. Harmonic surface deformation is a variational technique for gradient domain surface manipulation. This work demonstrate that linear energy regularization can overcome some of the inherent limitations associated with this technique, can effectively reduce common artifacts associated with this method, eliminating the need for costly non-linear regularization, and expanding the modeling capabilities for harmonic surface deformation.}, }
Endnote
%0 Thesis %A Kozlov, Yeara %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Analysis of Energy Regularization for Harmonic Surface Deformation : %U http://hdl.handle.net/11858/00-001M-0000-001A-34CB-9 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %V master %9 master %X Recently it has been shown that regularization can be beneficial for a variety of geometry processing methods on discretized domains. Linear energy regularization, proposed by Martinez Esturo et al. [MRT14], creates a global, linear regularization term which is strongly coupled with the deformation energy. It can be computed interactively, with little impact on runtime. This work analyzes the effects of linear energy regularization on harmonic surface deformation, proposed by Zayer et al. [ZRKS05]. Harmonic surface deformation is a variational technique for gradient domain surface manipulation. This work demonstrate that linear energy regularization can overcome some of the inherent limitations associated with this technique, can effectively reduce common artifacts associated with this method, eliminating the need for costly non-linear regularization, and expanding the modeling capabilities for harmonic surface deformation.
Kozlov, Y., Esturo, J.M., Seidel, H.-P., and Weinkauf, T. 2014. Regularized Harmonic Surface Deformation. http://arxiv.org/abs/1408.3326.
(arXiv: 1408.3326)
Abstract
Harmonic surface deformation is a well-known geometric modeling method that creates plausible deformations in an interactive manner. However, this method is susceptible to artifacts, in particular close to the deformation handles. These artifacts often correlate with strong gradients of the deformation energy. In this work, we propose a novel formulation of harmonic surface deformation, which incorporates a regularization of the deformation energy. To do so, we build on and extend a recently introduced generic linear regularization approach. It can be expressed as a change of norm for the linear optimization problem, i.e., the regularization is baked into the optimization. This minimizes the implementation complexity and has only a small impact on runtime. Our results show that a moderate use of regularization suppresses many deformation artifacts common to the well-known harmonic surface deformation method, without introducing new artifacts.
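As a hedged illustration of the "change of norm" idea mentioned in the abstract, a regularized quadratic deformation energy keeps the same least-squares structure but with a modified system matrix. The symbols below (H for the energy's Hessian, R for a generic regularization operator, C and d for handle constraints, lambda for the weight) are generic placeholders, not the operators used in the paper.

```latex
% Unregularized: minimize the quadratic deformation energy under handle
% constraints.  Regularized: the same problem measured in a modified norm
% (H -> H + lambda R), so the linear solver is unchanged.
\[
\begin{aligned}
  \text{unregularized:}\quad & \min_{x}\ x^{\top} H\, x
      \quad \text{s.t.}\quad C x = d,\\
  \text{regularized:}\quad   & \min_{x}\ x^{\top} (H + \lambda R)\, x
      \quad \text{s.t.}\quad C x = d.
\end{aligned}
\]
```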
Export
BibTeX
@online{kozlov14, TITLE = {Regularized Harmonic Surface Deformation}, AUTHOR = {Kozlov, Yeara and Esturo, Janick Martinez and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1408.3326}, EPRINT = {1408.3326}, EPRINTTYPE = {arXiv}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Harmonic surface deformation is a well-known geometric modeling method that creates plausible deformations in an interactive manner. However, this method is susceptible to artifacts, in particular close to the deformation handles. These artifacts often correlate with strong gradients of the deformation energy.In this work, we propose a novel formulation of harmonic surface deformation, which incorporates a regularization of the deformation energy. To do so, we build on and extend a recently introduced generic linear regularization approach. It can be expressed as a change of norm for the linear optimization problem, i.e., the regularization is baked into the optimization. This minimizes the implementation complexity and has only a small impact on runtime. Our results show that a moderate use of regularization suppresses many deformation artifacts common to the well-known harmonic surface deformation method, without introducing new artifacts.}, }
Endnote
%0 Report %A Kozlov, Yeara %A Esturo, Janick Martinez %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Regularized Harmonic Surface Deformation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-49F5-A %U http://arxiv.org/abs/1408.3326 %D 2014 %X Harmonic surface deformation is a well-known geometric modeling method that creates plausible deformations in an interactive manner. However, this method is susceptible to artifacts, in particular close to the deformation handles. These artifacts often correlate with strong gradients of the deformation energy.In this work, we propose a novel formulation of harmonic surface deformation, which incorporates a regularization of the deformation energy. To do so, we build on and extend a recently introduced generic linear regularization approach. It can be expressed as a change of norm for the linear optimization problem, i.e., the regularization is baked into the optimization. This minimizes the implementation complexity and has only a small impact on runtime. Our results show that a moderate use of regularization suppresses many deformation artifacts common to the well-known harmonic surface deformation method, without introducing new artifacts. %K Computer Science, Graphics, cs.GR
Kurz, C., Wu, X., Wand, M., Thormählen, T., Kohli, P., and Seidel, H.-P. 2014. Symmetry-aware Template Deformation and Fitting. Computer Graphics Forum 33, 6.
Export
BibTeX
@article{Kurz2014, TITLE = {Symmetry-aware Template Deformation and Fitting}, AUTHOR = {Kurz, Christian and Wu, Xiaokun and Wand, Michael and Thorm{\"a}hlen, Thorsten and Kohli, P. and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12344}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Computer Graphics Forum}, VOLUME = {33}, NUMBER = {6}, PAGES = {205--219}, }
Endnote
%0 Journal Article %A Kurz, Christian %A Wu, Xiaokun %A Wand, Michael %A Thorm&#228;hlen, Thorsten %A Kohli, P. %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Symmetry-aware Template Deformation and Fitting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D2B-D %R 10.1111/cgf.12344 %7 2014-03-20 %D 2014 %J Computer Graphics Forum %V 33 %N 6 %& 205 %P 205 - 219 %I Wiley-Blackwell %C Oxford
Kurz, C. 2014. Constrained Camera Motion Estimation and 3D Reconstruction. urn:nbn:de:bsz:291-scidok-59439.
Export
BibTeX
@phdthesis{KurzPhD2014, TITLE = {Constrained Camera Motion Estimation and {3D} Reconstruction}, AUTHOR = {Kurz, Christian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59439}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, }
Endnote
%0 Thesis %A Kurz, Christian %Y Seidel, Hans-Peter %A referee: Thorm&#228;hlen, Thorsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Constrained Camera Motion Estimation and 3D Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-54C2-1 %U urn:nbn:de:bsz:291-scidok-59439 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5943/
Levinkov, E. 2014. Scene Segmentation in Adverse Vision Conditions. Pattern Recognition (GCPR 2014), Springer.
Export
BibTeX
@inproceedings{882, TITLE = {Scene Segmentation in Adverse Vision Conditions}, AUTHOR = {Levinkov, Evgeny}, LANGUAGE = {eng}, ISBN = {978-3-319-11751-5}, DOI = {10.1007/978-3-319-11752-2_64}, PUBLISHER = {Springer}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014-09}, BOOKTITLE = {Pattern Recognition (GCPR 2014)}, EDITOR = {Jiang, Xiaoyi and Hornegger, Joachim and Koch, Reinhard}, PAGES = {750--756}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {8753}, ADDRESS = {M{\"u}nster, Germany}, }
Endnote
%0 Conference Proceedings %A Levinkov, Evgeny %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Scene Segmentation in Adverse Vision Conditions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4CD4-5 %R 10.1007/978-3-319-11752-2_64 %D 2014 %B 36th German Conference on Pattern Recognition %Z date of event: 2014-09-02 - 2014-09-05 %C M&#252;nster, Germany %B Pattern Recognition %E Jiang, Xiaoyi; Hornegger, Joachim; Koch, Reinhard %P 750 - 756 %I Springer %@ 978-3-319-11751-5 %B Lecture Notes in Computer Science %N 8753
Lissermann, R., Huber, J., Schmitz, M., Steimle, J., and Mühlhäuser, M. 2014. Permulin: Mixed-focus Collaboration on Multi-view Tabletops. CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM.
Export
BibTeX
@inproceedings{Lissermann14, TITLE = {Permulin: {M}ixed-focus collaboration on multi-view tabletops}, AUTHOR = {Lissermann, Roman and Huber, Jochen and Schmitz, Martin and Steimle, J{\"u}rgen and M{\"u}hlh{\"a}usler, Max}, LANGUAGE = {eng}, ISBN = {978-1-4503-2473-1}, DOI = {10.1145/2556288.2557405}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems}, PAGES = {3191--3200}, ADDRESS = {Toronto, Canada}, }
Endnote
%0 Conference Proceedings %A Lissermann, Roman %A Huber, Jochen %A Schmitz, Martin %A Steimle, J&#252;rgen %A M&#252;hlh&#228;usler, Max %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Permulin: Mixed-focus Collaboration on Multi-view Tabletops : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-BFC1-2 %R 10.1145/2556288.2557405 %D 2014 %B 32nd Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2014-04-26 - 2014-05-01 %C Toronto, Canada %B CHI 2014 %P 3191 - 3200 %I ACM %@ 978-1-4503-2473-1
Liu, Y., Ye, G., Wang, Y., Dai, Q., and Theobalt, C. 2014. Human Performance Capture Using Multiple Handheld Kinects. In: Advances in Computer Vision and Pattern Recognition. Springer, Berlin.
Export
BibTeX
@incollection{theobalt2014, TITLE = {Human Performance Capture Using Multiple Handheld Kinects}, AUTHOR = {Liu, Yebin and Ye, Genzhi and Wang, Yangang and Dai, Qionghai and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {2191-6586}, DOI = {10.1007/978-3-319-08651-4_5}, PUBLISHER = {Springer}, ADDRESS = {Berlin}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, SERIES = {Advances in Computer Vision and Pattern Recognition}, EDITOR = {Shao, Ling and Han, Jungong and Kohli, Pushmeet and Zhang, Zhengyou}, PAGES = {91--108}, }
Endnote
%0 Book Section %A Liu, Yebin %A Ye, Genzhi %A Wang, Yangang %A Dai, Qionghai %A Theobalt, Christian %+ External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Human Performance Capture Using Multiple Handheld Kinects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF67-F %R 10.1007/978-3-319-08651-4_5 %D 2014 %S Advances in Computer Vision and Pattern Recognition %P 91 - 108 %I Springer %C Berlin %@ false
Lochmann, G., Reinert, B., Ritschel, T., Müller, S., and Seidel, H.-P. 2014. Real-time Reflective and Refractive Novel-view Synthesis. VMV 2014 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{LochmannVMV2014, TITLE = {Real-time Reflective and Refractive Novel-view Synthesis}, AUTHOR = {Lochmann, Gerrit and Reinert, Bernhard and Ritschel, Tobias and M{\"u}ller, Stefan and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.2312/vmv.20141270}, PUBLISHER = {Eurographics Association}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {VMV 2014 Vision, Modeling and Visualization}, EDITOR = {Bender, Jan and Kuijper, Arjan and Landesberger, Tatiana and Theisel, Holger and Urban, Philipp}, PAGES = {9--16}, ADDRESS = {Darmstadt, Germany}, }
Endnote
%0 Conference Proceedings %A Lochmann, Gerrit %A Reinert, Bernhard %A Ritschel, Tobias %A M&#252;ller, Stefan %A Seidel, Hans-Peter %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Real&#8208;time Reflective and Refractive Novel&#8208;view Synthesis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-533E-6 %R 10.2312/vmv.20141270 %D 2014 %B 19th International Workshop on Vision, Modeling and Visualization %Z date of event: 2014-10-08 - 2014-10-10 %C Darmstadt, Germany %B VMV 2014 Vision, Modeling and Visualization %E Bender, Jan; Kuijper, Arjan; Landesberger, Tatiana; Theisel, Holger; Urban, Philipp %P 9 - 16 %I Eurographics Association %U http://dx.doi.org/10.2312/vmv.20141270
Martinez Esturo, J., Rössl, C., and Theisel, H. 2014a. Generalized Metric Energies for Continuous Shape Deformation. Mathematical Methods for Curves and Surfaces (MMCS 2012), Springer.
Export
BibTeX
@inproceedings{MartinezEsturo2013a, TITLE = {Generalized Metric Energies for Continuous Shape Deformation}, AUTHOR = {Martinez Esturo, Janick and R{\"o}ssl, Christian and Theisel, Holger}, LANGUAGE = {eng}, ISSN = {0302-9743}, ISBN = {978-3-642-5438}, DOI = {10.1007/978-3-642-54382-1_8}, LOCALID = {Local-ID: 55311F3F18B94846C1257C600056A2E5-MartinezEsturo2013a}, PUBLISHER = {Springer}, YEAR = {2012}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Mathematical Methods for Curves and Surfaces (MMCS 2012)}, EDITOR = {Floater, Michael and Lyche, Tom and Mazure, Marie-Laurence and M{\o}rken, Knut and Schumaker, Larry L.}, ISSUE = {1}, PAGES = {135--157}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {8177}, ADDRESS = {Oslo, Norway}, }
Endnote
%0 Conference Proceedings %A Martinez Esturo, Janick %A R&#246;ssl, Christian %A Theisel, Holger %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Generalized Metric Energies for Continuous Shape Deformation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-1CC8-C %R 10.1007/978-3-642-54382-1_8 %F OTHER: Local-ID: 55311F3F18B94846C1257C600056A2E5-MartinezEsturo2013a %D 2014 %B 8th International Conference on Mathematical Methods for Curves and Surfaces %Z date of event: 2012-06-28 - 2012-07-03 %C Oslo, Norway %B Mathematical Methods for Curves and Surfaces %E Floater, Michael; Lyche, Tom; Mazure, Marie-Laurence; M&#248;rken, Knut; Schumaker, Larry L. %N 1 %P 135 - 157 %I Springer %@ 978-3-642-5438 %B Lecture Notes in Computer Science %N 8177 %@ false
Martinez Esturo, J., Rössl, C., and Theisel, H. 2014b. Smoothed Quadratic Energies on Meshes. ACM Transactions on Graphics 34, 1.
Export
BibTeX
@article{MartinezEsturo2014, TITLE = {Smoothed Quadratic Energies on Meshes}, AUTHOR = {Martinez Esturo, Janick and R{\"o}ssl, Christian and Theisel, Holger}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2682627}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {ACM Transactions on Graphics}, VOLUME = {34}, NUMBER = {1}, EID = {2}, }
Endnote
%0 Journal Article %A Martinez Esturo, Janick %A R&#246;ssl, Christian %A Theisel, Holger %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Smoothed Quadratic Energies on Meshes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-258B-3 %R 10.1145/2682627 %7 2014 %D 2014 %J ACM Transactions on Graphics %V 34 %N 1 %Z sequence number: 2 %I ACM %C New York, NY %@ false
Metha, V. 2014. An Empirical Study of How People Use Skin as an Input Surface for Mobile and Wearable Computing. Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@mastersthesis{MethaMastersThesis2014, TITLE = {An Empirical Study of How People Use Skin as an Input Surface for Mobile and Wearable Computing}, AUTHOR = {Metha, Vikram}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, }
Endnote
%0 Thesis %A Metha, Vikram %+ Computer Graphics, MPI for Informatics, Max Planck Society %T An Empirical Study of How People Use Skin as an Input Surface for Mobile and Wearable Computing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D8C-2 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %V master %9 master
Nalbach, O., Ritschel, T., and Seidel, H.-P. 2014a. Deep Screen Space for Indirect Lighting of Volumes. VMV 2014 Vision, Modeling and Visualization, Eurographics Association.
Export
BibTeX
@inproceedings{DBLP:conf/vmv/NalbachRS14, TITLE = {Deep Screen Space for Indirect Lighting of Volumes}, AUTHOR = {Nalbach, Oliver and Ritschel, Tobias and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-3-905674-74-3}, DOI = {10.2312/vmv.20141287}, PUBLISHER = {Eurographics Association}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {VMV 2014 Vision, Modeling and Visualization}, EDITOR = {Bender, Jan and Kuijper, Arjan and von Landesberger, Tatiana and Theisel, Holger and Urban, Philipp}, PAGES = {143--150}, ADDRESS = {Darmstadt, Germany}, }
Endnote
%0 Conference Proceedings %A Nalbach, Oliver %A Ritschel, Tobias %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Deep Screen Space for Indirect Lighting of Volumes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4D6C-B %R 10.2312/vmv.20141287 %D 2014 %B 19th International Workshop on Vision, Modeling and Visualization %Z date of event: 2014-10-08 - 2014-10-10 %C Darmstadt, Germany %B VMV 2014 Vision, Modeling and Visualization %E Bender, Jan; Kuijper, Arjan; von Landesberger, Tatiana; Theisel, Holger; Urban, Philipp %P 143 - 150 %I Eurographics Association %@ 978-3-905674-74-3 %U http://dx.doi.org/10.2312/vmv.20141287
Nalbach, O., Ritschel, T., and Seidel, H.-P. 2014b. Deep Screen Space. Proceedings I3D 2014, ACM.
Export
BibTeX
@inproceedings{Nalbach:2014:DSS:2556700.2556708, TITLE = {Deep Screen Space}, AUTHOR = {Nalbach, Oliver and Ritschel, Tobias and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-4503-2717-6}, URL = {http://doi.acm.org/10.1145/2556700.2556708}, DOI = {10.1145/2556700.2556708}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Proceedings I3D 2014}, EDITOR = {Keyser, John and Sander, Pedro}, PAGES = {79--86}, ADDRESS = {San Francisco, CA, USA}, }
Endnote
%0 Conference Proceedings %A Nalbach, Oliver %A Ritschel, Tobias %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Deep Screen Space : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4D74-8 %R 10.1145/2556700.2556708 %U http://doi.acm.org/10.1145/2556700.2556708 %D 2014 %B 18th Meeting of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games %Z date of event: 2014-03-14 - 2014-03-16 %C San Francisco, CA, USA %B Proceedings I3D 2014 %E Keyser, John; Sander, Pedro %P 79 - 86 %I ACM %@ 978-1-4503-2717-6
Neumann, T., Varanasi, K., Theobalt, C., Magnor, M., and Wacker, M. 2014. Compressed Manifold Modes for Mesh Processing. Computer Graphics Forum (Proc. Eurographics Symposium on Geometry Processing 2014) 33, 5.
Export
BibTeX
@article{NeumannVaranasiSGP2014, TITLE = {Compressed Manifold Modes for Mesh Processing}, AUTHOR = {Neumann, Thomas and Varanasi, Kiran and Theobalt, Christian and Magnor, Marcus and Wacker, Markus}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12429}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Geometry Processing)}, VOLUME = {33}, NUMBER = {5}, PAGES = {35--44}, BOOKTITLE = {Eurographics Symposium on Geometry Processing}, }
Endnote
%0 Journal Article %A Neumann, Thomas %A Varanasi, Kiran %A Theobalt, Christian %A Magnor, Marcus %A Wacker, Markus %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Compressed Manifold Modes for Mesh Processing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-7FCD-4 %R 10.1111/cgf.12429 %7 2014 %D 2014 %J Computer Graphics Forum %V 33 %N 5 %& 35 %P 35 - 44 %I Wiley-Blackwell %C Oxford, UK %@ false %B Eurographics Symposium on Geometry Processing %O SGP 2014 Eurographics Symposium on Geometry Processing 2014
Olberding, S., Wessely, M., and Steimle, J. 2014. PrintScreen: Fabricating Highly Customizable Thin-film Touch-displays. UIST’14, 27th Annual ACM Symposium on User Interface Software and Technology, ACM.
Export
BibTeX
@inproceedings{Olberding:2014:PFH:2642918.2647413, TITLE = {{PrintScreen}: Fabricating Highly Customizable Thin-film Touch-displays}, AUTHOR = {Olberding, Simon and Wessely, Michael and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISBN = {978-1-4503-3069-5}, DOI = {10.1145/2642918.2647413}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {UIST'14, 27th Annual ACM Symposium on User Interface Software and Technology}, EDITOR = {Benko, Hrvoje and Dontcheva, Mira and Wigdor, Daniel}, PAGES = {281--290}, ADDRESS = {Honolulu, HI, USA}, }
Endnote
%0 Conference Proceedings %A Olberding, Simon %A Wessely, Michael %A Steimle, J&#252;rgen %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T PrintScreen: Fabricating Highly Customizable Thin-film Touch-displays : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-9347-E %R 10.1145/2642918.2647413 %D 2014 %B 27th Annual ACM Symposium on User Interface Software and Technology %Z date of event: 2014-10-05 - 2014-10-08 %C Honolulu, HI, USA %K digital fabrication, electroluminescence, flexible display, printed electronics, rapid prototyping, tfel, thin-film display, touch input, ubiquitous computing. %B UIST'14 %E Benko, Hrvoje; Dontcheva, Mira; Wigdor, Daniel %P 281 - 290 %I ACM %@ 978-1-4503-3069-5
Ou, J., Yao, L., Tauber, D., Steimle, J., Niiyama, R., and Ishii, H. 2014. jamSheets: Thin Interfaces with Tunable Stiffness Enabled by Layer Jamming. Eighth International Conference on Tangible, Embedded and Embodied Interaction (TEI 2014), ACM.
Export
BibTeX
@inproceedings{Ou13, TITLE = {{jamSheets}: {Thin} Interfaces with Tunable Stiffness Enabled by Layer Jamming}, AUTHOR = {Ou, Jifei and Yao, Lining and Tauber, Daniel and Steimle, J{\"u}rgen and Niiyama, Ryuma and Ishii, Hiroshi}, LANGUAGE = {eng}, ISBN = {978-1-4503-2635-3}, DOI = {10.1145/2540930.2540971}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Eighth International Conference on Tangible, Embedded and Embodied Interaction (TEI 2014)}, ADDRESS = {Munich, Germany}, }
Endnote
%0 Conference Proceedings %A Ou, Jifei %A Yao, Lining %A Tauber, Daniel %A Steimle, J&#252;rgen %A Niiyama, Ryuma %A Ishii, Hiroshi %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T jamSheets: Thin Interfaces with Tunable Stiffness Enabled by Layer Jamming : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-BFDC-6 %R 10.1145/2540930.2540971 %D 2014 %B 8th International Conference on Tangible, Embedded and Embodied Interaction %Z date of event: 2014-02-16 - 2014-02-19 %C Munich, Germany %B Eighth International Conference on Tangible, Embedded and Embodied Interaction %I ACM %@ 978-1-4503-2635-3
Oulasvirta, A., Suomalainen, T., Hamari, J., Lampinen, A., and Karvonen, K. 2014a. Transparency of Intentions Decreases Privacy Concerns in Ubiquitous Surveillance. Cyberpsychology, Behavior and Social Networking 17, 10.
Export
BibTeX
@article{OulasvirtaTransparency2014, TITLE = {Transparency of Intentions Decreases Privacy Concerns in Ubiquitous Surveillance}, AUTHOR = {Oulasvirta, Antti and Suomalainen, Tiia and Hamari, Juho and Lampinen, Airi and Karvonen, Kristiina}, LANGUAGE = {eng}, ISSN = {2152-2715}, DOI = {10.1089/cyber.2013.0585}, PUBLISHER = {Liebert}, ADDRESS = {New Rochelle, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Cyberpsychology, Behavior and Social Networking}, VOLUME = {17}, NUMBER = {10}, PAGES = {633--638}, }
Endnote
%0 Journal Article %A Oulasvirta, Antti %A Suomalainen, Tiia %A Hamari, Juho %A Lampinen, Airi %A Karvonen, Kristiina %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T Transparency of Intentions Decreases Privacy Concerns in Ubiquitous Surveillance : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-6C7B-F %F ISI: 000342507000001 %R 10.1089/cyber.2013.0585 %7 2014 %D 2014 %J Cyberpsychology, Behavior and Social Networking %V 17 %N 10 %& 633 %P 633 - 638 %I Liebert %C New Rochelle, NY %@ false
Oulasvirta, A., Weinkauf, T., Bachynskyi, M., and Palmas, G. 2014b. Gestikulieren mit Stil. Informatik Spektrum 37, 5.
Export
BibTeX
@article{oulasvirta14, TITLE = {{{Gestikulieren mit Stil}}}, AUTHOR = {Oulasvirta, Antti and Weinkauf, Tino and Bachynskyi, Myroslav and Palmas, Gregorio}, LANGUAGE = {deu}, ISSN = {0170-6012}, DOI = {10.1007/s00287-014-0816-2}, PUBLISHER = {Springer}, ADDRESS = {Berlin}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Informatik Spektrum}, VOLUME = {37}, NUMBER = {5}, PAGES = {449--453}, }
Endnote
%0 Journal Article %A Oulasvirta, Antti %A Weinkauf, Tino %A Bachynskyi, Myroslav %A Palmas, Gregorio %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Gestikulieren mit Stil : %G deu %U http://hdl.handle.net/11858/00-001M-0000-0024-4D19-3 %R 10.1007/s00287-014-0816-2 %7 2014 %D 2014 %J Informatik Spektrum %V 37 %N 5 %& 449 %P 449 - 453 %I Springer %C Berlin %@ false
Pajak, D., Herzog, R., Mantiuk, R., et al. 2014. Perceptual Depth Compression for Stereo Applications. Computer Graphics Forum (Proc. EUROGRAPHICS 2014) 33, 2.
Abstract
Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e. g., for glass, that both reflects and refracts, which may confuse the observer and result in poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates, that our approach combines comfortable viewing with realistic depiction of typical specular scenes.
Export
BibTeX
@article{PajakEG2014, TITLE = {Perceptual Depth Compression for Stereo Applications}, AUTHOR = {Pajak, Dawid and Herzog, Robert and Mantiuk, Rados{\l}aw and Didyk, Piotr and Eisemann, Elmar and Myszkowski, Karol and Pulli, Kari}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12293}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e. g., for glass, that both reflects and refracts, which may confuse the observer and result in poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates, that our approach combines comfortable viewing with realistic depiction of typical specular scenes.}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {33}, NUMBER = {2}, PAGES = {195--204}, BOOKTITLE = {EUROGRAPHICS 2014}, EDITOR = {L{\'e}vy, Bruno and Kautz, Jan}, }
Endnote
%0 Journal Article %A Pajak, Dawid %A Herzog, Robert %A Mantiuk, Rados&#322;aw %A Didyk, Piotr %A Eisemann, Elmar %A Myszkowski, Karol %A Pulli, Kari %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Perceptual Depth Compression for Stereo Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-3C0C-0 %R 10.1111/cgf.12293 %7 2014-06-01 %D 2014 %X Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e. g., for glass, that both reflects and refracts, which may confuse the observer and result in poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates, that our approach combines comfortable viewing with realistic depiction of typical specular scenes. %J Computer Graphics Forum %V 33 %N 2 %& 195 %P 195 - 204 %I Wiley-Blackwell %C Oxford, UK %B EUROGRAPHICS 2014 %O The European Association for Computer Graphics 35th Annual Conference ; Strasbourg, France, April 7th &#8211; 11th, 2014 EUROGRAPHICS 2014 EG 2014
Palmas, G., Bachynskyi, M., Oulasvirta, A., Seidel, H.-P., and Weinkauf, T. 2014a. MovExp: A Versatile Visualization Tool for Human-Computer Interaction Studies with 3D Performance and Biomechanical Data. IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS 2014) 20, 12.
Abstract
In Human-Computer Interaction (HCI), experts seek to evaluate and compare the performance and ergonomics of user interfaces. Recently, a novel cost-efficient method for estimating physical ergonomics and performance has been introduced to HCI. It is based on optical motion capture and biomechanical simulation. It provides a rich source for analyzing human movements summarized in a multidimensional data set. Existing visualization tools do not sufficiently support the HCI experts in analyzing this data. We identified two shortcomings. First, appropriate visual encodings are missing particularly for the biomechanical aspects of the data. Second, the physical setup of the user interface cannot be incorporated explicitly into existing tools. We present MovExp, a versatile visualization tool that supports the evaluation of user interfaces. In particular, it can be easily adapted by the HCI experts to include the physical setup that is being evaluated, and visualize the data on top of it. Furthermore, it provides a variety of visual encodings to communicate muscular loads, movement directions, and other specifics of HCI studies that employ motion capture and biomechanical simulation. In this design study, we follow a problem-driven research approach. Based on a formalization of the visualization needs and the data structure, we formulate technical requirements for the visualization tool and present novel solutions to the analysis needs of the HCI experts. We show the utility of our tool with four case studies from the daily work of our HCI experts.
Export
BibTeX
@article{palmas14b, TITLE = {{MovExp}: A Versatile Visualization Tool for Human-Computer Interaction Studies with {3D} Performance and Biomechanical Data}, AUTHOR = {Palmas, Gregorio and Bachynskyi, Myroslav and Oulasvirta, Antti and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, ISSN = {1077-2626}, DOI = {10.1109/TVCG.2014.2346311}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014-12}, ABSTRACT = {In Human-Computer Interaction (HCI), experts seek to evaluate and compare the performance and ergonomics of user interfaces. Recently, a novel cost-efficient method for estimating physical ergonomics and performance has been introduced to HCI. It is based on optical motion capture and biomechanical simulation. It provides a rich source for analyzing human movements summarized in a multidimensional data set. Existing visualization tools do not sufficiently support the HCI experts in analyzing this data. We identified two shortcomings. First, appropriate visual encodings are missing particularly for the biomechanical aspects of the data. Second, the physical setup of the user interface cannot be incorporated explicitly into existing tools. We present MovExp, a versatile visualization tool that supports the evaluation of user interfaces. In particular, it can be easily adapted by the HCI experts to include the physical setup that is being evaluated, and visualize the data on top of it. Furthermore, it provides a variety of visual encodings to communicate muscular loads, movement directions, and other specifics of HCI studies that employ motion capture and biomechanical simulation. In this design study, we follow a problem-driven research approach. Based on a formalization of the visualization needs and the data structure, we formulate technical requirements for the visualization tool and present novel solutions to the analysis needs of the HCI experts. We show the utility of our tool with four case studies from the daily work of our HCI experts.}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS)}, VOLUME = {20}, NUMBER = {12}, PAGES = {2359--2368}, BOOKTITLE = {IEEE Visual Analytics Science \& Technology Conference, IEEE Information Visualization Conference, and IEEE Scientific Visualization Conference Proceedings 2014}, DEBUG = {author: Ebert, David; author: Hauser, Helwig; author: Heer, Jeffrey; author: North, Chris; author: Tory, Melanie; author: Qu, Huamin; author: Shen, Han-Wei; author: Ynnerman, Anders}, EDITOR = {Chen, Min}, }
Endnote
%0 Journal Article %A Palmas, Gregorio %A Bachynskyi, Myroslav %A Oulasvirta, Antti %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T MovExp: A Versatile Visualization Tool for Human-Computer Interaction Studies with 3D Performance and Biomechanical Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4D4C-4 %R 10.1109/TVCG.2014.2346311 %7 2014 %D 2014 %X In Human-Computer Interaction (HCI), experts seek to evaluate and compare the performance and ergonomics of user interfaces. Recently, a novel cost-efficient method for estimating physical ergonomics and performance has been introduced to HCI. It is based on optical motion capture and biomechanical simulation. It provides a rich source for analyzing human movements summarized in a multidimensional data set. Existing visualization tools do not sufficiently support the HCI experts in analyzing this data. We identified two shortcomings. First, appropriate visual encodings are missing particularly for the biomechanical aspects of the data. Second, the physical setup of the user interface cannot be incorporated explicitly into existing tools. We present MovExp, a versatile visualization tool that supports the evaluation of user interfaces. In particular, it can be easily adapted by the HCI experts to include the physical setup that is being evaluated, and visualize the data on top of it. Furthermore, it provides a variety of visual encodings to communicate muscular loads, movement directions, and other specifics of HCI studies that employ motion capture and biomechanical simulation. In this design study, we follow a problem-driven research approach. Based on a formalization of the visualization needs and the data structure, we formulate technical requirements for the visualization tool and present novel solutions to the analysis needs of the HCI experts. We show the utility of our tool with four case studies from the daily work of our HCI experts. %J IEEE Transactions on Visualization and Computer Graphics %V 20 %N 12 %& 2359 %P 2359 - 2368 %I IEEE Computer Society %C Los Alamitos, CA %@ false %B IEEE Visual Analytics Science & Technology Conference, IEEE Information Visualization Conference, and IEEE Scientific Visualization Conference Proceedings 2014 %O Proceedings 2014 ; Paris, France, 9&#8211;14 November 2014 IEEE VIS 2014
Palmas, G., Bachynskyi, M., Oulasvirta, A., Seidel, H.-P., and Weinkauf, T. 2014b. An Edge-bundling Layout for Interactive Parallel Coordinates. PacificVis 2014, IEEE Pacific Visualization Symposium, IEEE Computer Society.
Abstract
Parallel Coordinates is an often used visualization method for multidimensional data sets. Its main challenges for large data sets are visual clutter and overplotting which hamper the recognition of patterns in the data. We present an edge-bundling method using density-based clustering for each dimension. This reduces clutter and provides a faster overview of clusters and trends. Moreover, it allows rendering the clustered lines using polygons, decreasing rendering time remarkably. In addition, we design interactions to support multidimensional clustering with this method. A user study shows improvements over the classic parallel coordinates plot in two user tasks: correlation estimation and subset tracing.
Export
BibTeX
@inproceedings{palmas14a, TITLE = {An Edge-bundling Layout for Interactive Parallel Coordinates}, AUTHOR = {Palmas, Gregorio and Bachynskyi, Myroslav and Oulasvirta, Antti and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, DOI = {10.1109/PacificVis.2014.40}, PUBLISHER = {IEEE Computer Society}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014-03}, ABSTRACT = {Parallel Coordinates is an often used visualization method for multidimensional data sets. Its main challenges for large data sets are visual clutter and overplotting which hamper the recognition of patterns in the data. We present an edge-bundling method using density-based clustering for each dimension. This reduces clutter and provides a faster overview of clusters and trends. Moreover, it allows rendering the clustered lines using polygons, decreasing rendering time remarkably. In addition, we design interactions to support multidimensional clustering with this method. A user study shows improvements over the classic parallel coordinates plot in two user tasks: correlation estimation and subset tracing.}, BOOKTITLE = {PacificVis 2014, IEEE Pacific Visualization Symposium}, PAGES = {57--64}, ADDRESS = {Yokohama, Japan}, }
Endnote
%0 Conference Proceedings %A Palmas, Gregorio %A Bachynskyi, Myroslav %A Oulasvirta, Antti %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T An Edge-bundling Layout for Interactive Parallel Coordinates : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4D29-0 %R 10.1109/PacificVis.2014.40 %D 2014 %B IEEE Pacific Visualization Symposium %Z date of event: 2014-03-04 - 2014-03-07 %C Yokohama, Japan %X Parallel Coordinates is an often used visualization method for multidimensional data sets. Its main challenges for large data sets are visual clutter and overplotting which hamper the recognition of patterns in the data. We present an edge-bundling method using density-based clustering for each dimension. This reduces clutter and provides a faster overview of clusters and trends. Moreover, it allows rendering the clustered lines using polygons, decreasing rendering time remarkably. In addition, we design interactions to support multidimensional clustering with this method. A user study shows improvements over the classic parallel coordinates plot in two user tasks: correlation estimation and subset tracing. %B PacificVis 2014 %P 57 - 64 %I IEEE Computer Society
Pece, F., Tompkin, J., Pfister, H., Kautz, J., and Theobalt, C. 2014. Device Effect on Panoramic Video+Context Tasks. 11th European Conference on Visual Media Production (CVMP), ACM.
Abstract
Panoramic imagery is viewed daily by thousands of people, and panoramic video imagery is becoming more common. This imagery is viewed on many different devices with different properties, and the effect of these differences on spatio-temporal task performance is yet untested on these imagery. We adapt a novel panoramic video interface and conduct a user study to discover whether display type affects spatio-temporal reasoning task performance across desktop monitor, tablet, and head-mounted displays. We discover that, in our complex reasoning task, HMDs are as effective as desktop displays even if participants felt less capable, but tablets were less effective than desktop displays even though participants felt just as capable. Our results impact virtual tourism, telepresence, and surveillance applications, and so we state the design implications of our results for panoramic imagery systems.
Export
BibTeX
@inproceedings{PeceCVMP2014, TITLE = {Device Effect on Panoramic Video+Context Tasks}, AUTHOR = {Pece, Fabrizio and Tompkin, James and Pfister, Hanspeter and Kautz, Jan and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4503-3185-2}, DOI = {10.1145/2668904.2668943}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {Panoramic imagery is viewed daily by thousands of people, and panoramic video imagery is becoming more common. This imagery is viewed on many different devices with different properties, and the effect of these differences on spatio-temporal task performance is yet untested on these imagery. We adapt a novel panoramic video interface and conduct a user study to discover whether display type affects spatio-temporal reasoning task performance across desktop monitor, tablet, and head-mounted displays. We discover that, in our complex reasoning task, HMDs are as effective as desktop displays even if participants felt less capable, but tablets were less effective than desktop displays even though participants felt just as capable. Our results impact virtual tourism, telepresence, and surveillance applications, and so we state the design implications of our results for panoramic imagery systems.}, BOOKTITLE = {11th European Conference on Visual Media Production (CVMP)}, PAGES = {14:1--14:9}, ADDRESS = {London, UK}, }
Endnote
%0 Conference Proceedings %A Pece, Fabrizio %A Tompkin, James %A Pfister, Hanspeter %A Kautz, Jan %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Device Effect on Panoramic Video+Context Tasks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF6E-1 %R 10.1145/2668904.2668943 %D 2014 %B 11th European Conference on Visual Media Production %Z date of event: 2014-11-13 - 2014-11-14 %C London, UK %X Panoramic imagery is viewed daily by thousands of people, and panoramic video imagery is becoming more common. This imagery is viewed on many different devices with different properties, and the effect of these differences on spatio-temporal task performance is yet untested on these imagery. We adapt a novel panoramic video interface and conduct a user study to discover whether display type affects spatio-temporal reasoning task performance across desktop monitor, tablet, and head-mounted displays. We discover that, in our complex reasoning task, HMDs are as effective as desktop displays even if participants felt less capable, but tablets were less effective than desktop displays even though participants felt just as capable. Our results impact virtual tourism, telepresence, and surveillance applications, and so we state the design implications of our results for panoramic imagery systems. %K immersion, multidisplay adaption, panoramas, video %B 11th European Conference on Visual Media Production (CVMP) %P 14:1 - 14:9 %I ACM %@ 978-1-4503-3185-2 %U http://doi.acm.org/10.1145/2668904.2668943
Polthier, K., Bobenko, A., Hildebrandt, K., et al. 2014. Geometry Processing. In: Matheon -- Mathematics for Key Technologies. European Mathematical Society, Zürich.
Export
BibTeX
@incollection{Polthier2014, TITLE = {Geometry Processing}, AUTHOR = {Polthier, Konrad and Bobenko, Alexander and Hildebrandt, Klaus and Kornhuber, Ralf and von Tycowicz, Christoph and Yserentant, Harry and Ziegler, G{\"u}nther M.}, LANGUAGE = {eng}, ISBN = {978-3-03719-137-8}, PUBLISHER = {European Mathematical Society}, ADDRESS = {Z{\"u}rich}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Matheon -- Mathematics for Key Technologies}, EDITOR = {Deuflhard, Peter and Gr{\"o}tschel, Martin and H{\"o}rnberg, Dietmar and Horst, Ulrich and Kramer, J{\"u}rg and Mehrmann, Volker and Schmidt, Frank and Sch{\"u}tte, Christof and Skutella, Martin and Sprekels, J{\"u}rgen}, PAGES = {341--355}, SERIES = {EMS Series in Industrial and Applied Mathematics}, VOLUME = {1}, }
Endnote
%0 Book Section %A Polthier, Konrad %A Bobenko, Alexander %A Hildebrandt, Klaus %A Kornhuber, Ralf %A von Tycowicz, Christoph %A Yserentant, Harry %A Ziegler, G&#252;nther M. %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T Geometry Processing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D5C-0 %D 2014 %B Matheon -- Mathematics for Key Technologies %E Deuflhard, Peter; Gr&#246;tschel, Martin; H&#246;rnberg, Dietmar; Horst, Ulrich; Kramer, J&#252;rg; Mehrmann, Volker; Schmidt, Frank; Sch&#252;tte, Christof; Skutella, Martin; Sprekels, J&#252;rgen %P 341 - 355 %I European Mathematical Society %C Z&#252;rich %@ 978-3-03719-137-8 %S EMS Series in Industrial and Applied Mathematics %N 1
Rematas, K., Ritschel, T., Fritz, M., and Tuytelaars, T. 2014. Image-based Synthesis and Re-Synthesis of Viewpoints Guided by 3D Models. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), IEEE Computer Society.
Export
BibTeX
@inproceedings{kostas14cvpr, TITLE = {Image-based Synthesis and Re-Synthesis of Viewpoints Guided by {3D} Models}, AUTHOR = {Rematas, Konstantinos and Ritschel, Tobias and Fritz, Mario and Tuytelaars, Tinne}, LANGUAGE = {eng}, ISBN = {978-1-4799-5117-8}, DOI = {10.1109/CVPR.2014.498}, PUBLISHER = {IEEE Computer Society}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014)}, PAGES = {3898--3905}, ADDRESS = {Columbus, OH, USA}, }
Endnote
%0 Conference Proceedings %A Rematas, Konstantinos %A Ritschel, Tobias %A Fritz, Mario %A Tuytelaars, Tinne %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Image-based Synthesis and Re-Synthesis of Viewpoints Guided by 3D Models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-8842-7 %R 10.1109/CVPR.2014.498 %D 2014 %B 2014 IEEE Conference on Computer Vision and Pattern Recognition %Z date of event: 2014-06-24 - 2014-06-27 %C Columbus, OH, USA %B 2014 IEEE Conference on Computer Vision and Pattern Recognition %P 3898 - 3905 %I IEEE Computer Society %@ 978-1-4799-5117-8
Reshetouski, I. 2014. Kaleidoscopic Imaging. urn:nbn:de:bsz:291-scidok-59308.
Export
BibTeX
@phdthesis{ReshetouskiPhD2014, TITLE = {Kaleidoscopic Imaging}, AUTHOR = {Reshetouski, Ilya}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59308}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, }
Endnote
%0 Thesis %A Reshetouski, Ilya %Y Seidel, Hans-Peter %A referee: Vetterli, Martin %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Kaleidoscopic Imaging : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-54C4-E %U urn:nbn:de:bsz:291-scidok-59308 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5930/
Rhodin, H., Tompkin, J., Kim, K.I., Varanasi, K., Seidel, H.-P., and Theobalt, C. 2014. Interactive Motion Mapping for Real-time Character Control. Computer Graphics Forum (Proc. EUROGRAPHICS 2014) 33, 2.
Export
BibTeX
@article{RhodinCGF2014, TITLE = {Interactive Motion Mapping for Real-time Character Control}, AUTHOR = {Rhodin, Helge and Tompkin, James and Kim, Kwang In and Varanasi, Kiran and Seidel, Hans-Peter and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12325}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014-05}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {33}, NUMBER = {2}, PAGES = {273--282}, BOOKTITLE = {EUROGRAPHICS 2014}, EDITOR = {L{\'e}vy, Bruno and Kautz, Jan}, }
Endnote
%0 Journal Article %A Rhodin, Helge %A Tompkin, James %A Kim, Kwang In %A Varanasi, Kiran %A Seidel, Hans-Peter %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Interactive Motion Mapping for Real-time Character Control : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-8096-6 %F ISI: 000337543000028 %R 10.1111/cgf.12325 %7 2014 %D 2014 %J Computer Graphics Forum %V 33 %N 2 %& 273 %P 273 - 282 %I Wiley-Blackwell %C Oxford, UK %@ false %B EUROGRAPHICS 2014 %O The European Association for Computer Graphics 35th Annual Conference ; Strasbourg, France, April 7th &#8211; 11th, 2014 EUROGRAPHICS 2014 EG 2014
Richardt, C., Lopez-Moreno, J., Bousseau, A., Agrawala, M., and Drettakis, G. 2014. Vectorising Bitmaps into Semi-Transparent Gradient Layers. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2014) 33, 4.
Export
BibTeX
@article{Richardt2014, TITLE = {Vectorising Bitmaps into Semi-Transparent Gradient Layers}, AUTHOR = {Richardt, Christian and Lopez-Moreno, Jorge and Bousseau, Adrien and Agrawala, Maneesh and Drettakis, George}, LANGUAGE = {eng}, ISSN = {1467-8659}, DOI = {10.1111/cgf.12408}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {33}, NUMBER = {4}, PAGES = {11--19}, BOOKTITLE = {Eurographics Symposium on Rendering 2014}, EDITOR = {Jarosz, Wojciech and Peers, Pieter}, }
Endnote
%0 Journal Article %A Richardt, Christian %A Lopez&#8208;Moreno, Jorge %A Bousseau, Adrien %A Agrawala, Maneesh %A Drettakis, George %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T Vectorising Bitmaps into Semi&#8208;Transparent Gradient Layers : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-F50E-A %R 10.1111/cgf.12408 %7 2014-07-15 %D 2014 %J Computer Graphics Forum %V 33 %N 4 %& 11 %P 11 - 19 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2014 %O EGSR 2014 Eurographics Symposium on Rendering 2014 Lyon, France ; 25 - 27 June 2014
Robertini, N., De Aguiar, E., Helten, T., and Theobalt, C. 2014. Efficient Multi-view Performance Capture of Fine-scale Surface Detail. 3DV 2014, International Conference on 3D Vision, IEEE Computer Society.
Export
BibTeX
@inproceedings{Robertini:2014, TITLE = {Efficient Multi-view Performance Capture of Fine-scale Surface Detail}, AUTHOR = {Robertini, Nadia and De Aguiar, Edilson and Helten, Thomas and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4799-7001-8}, DOI = {10.1109/3DV.2014.46}, PUBLISHER = {IEEE Computer Society}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {3DV 2014, International Conference on 3D Vision}, PAGES = {5--12}, ADDRESS = {Tokyo, Japan}, }
Endnote
%0 Conference Proceedings %A Robertini, Nadia %A De Aguiar, Edilson %A Helten, Thomas %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Multi-view Performance Capture of Fine-scale Surface Detail : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-6876-C %R 10.1109/3DV.2014.46 %D 2014 %B International Conference on 3D Vision %Z date of event: 2014-12-08 - 2014-12-11 %C Tokyo, Japan %B 3DV 2014 %P 5 - 12 %I IEEE Computer Society %@ 978-1-4799-7001-8
Roo, J.S. and Richardt, C. 2014. Temporally Coherent Video De-Anaglyph. ACM SIGGRAPH 2014 Talks, ACM.
Export
BibTeX
@inproceedings{Roo2014, TITLE = {Temporally Coherent Video De-Anaglyph}, AUTHOR = {Roo, Joan Sol and Richardt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4503-2960-6}, DOI = {10.1145/2614106.2614125}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {ACM SIGGRAPH 2014 Talks}, EID = {75}, ADDRESS = {Vancouver, Canada}, }
Endnote
%0 Conference Proceedings %A Roo, Joan Sol %A Richardt, Christian %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Temporally Coherent Video De-Anaglyph : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF85-B %R 10.1145/2614106.2614125 %D 2014 %B 41st International Conference and Exhibition on Computer Graphics and Interactive Techniques %Z date of event: 2014-08-10 - 2014-08-14 %C Vancouver, Canada %B ACM SIGGRAPH 2014 Talks %Z sequence number: 75 %I ACM %@ 978-1-4503-2960-6
Saikia, H., Seidel, H.-P., and Weinkauf, T. 2014. Extended Branch Decomposition Graphs: Structural Comparison of Scalar Data. Computer Graphics Forum (Proc. EuroVis 2014) 33, 3.
Abstract
We present a method to find repeating topological structures in scalar data sets. More precisely, we compare all subtrees of two merge trees against each other - in an efficient manner exploiting redundancy. This provides pair-wise distances between the topological structures defined by sub/superlevel sets, which can be exploited in several applications such as finding similar structures in the same data set, assessing periodic behavior in time-dependent data, and comparing the topology of two different data sets. To do so, we introduce a novel data structure called the extended branch decomposition graph, which is composed of the branch decompositions of all subtrees of the merge tree. Based on dynamic programming, we provide two highly efficient algorithms for computing and comparing extended branch decomposition graphs. Several applications attest to the utility of our method and its robustness against noise.
Export
BibTeX
@article{saikia14a, TITLE = {Extended Branch Decomposition Graphs: {Structural} Comparison of Scalar Data}, AUTHOR = {Saikia, Himangshu and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12360}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {We present a method to find repeating topological structures in scalar data sets. More precisely, we compare all subtrees of two merge trees against each other -- in an efficient manner exploiting redundancy. This provides pair-wise distances between the topological structures defined by sub/superlevel sets, which can be exploited in several applications such as finding similar structures in the same data set, assessing periodic behavior in time-dependent data, and comparing the topology of two different data sets. To do so, we introduce a novel data structure called the extended branch decomposition graph, which is composed of the branch decompositions of all subtrees of the merge tree. Based on dynamic programming, we provide two highly efficient algorithms for computing and comparing extended branch decomposition graphs. Several applications attest to the utility of our method and its robustness against noise.}, JOURNAL = {Computer Graphics Forum (Proc. EuroVis)}, VOLUME = {33}, NUMBER = {3}, PAGES = {41--50}, BOOKTITLE = {Eurographics Conference on Visualization 2014 (EuroVis 2014)}, EDITOR = {Carr, Hamish and Rheingans, Penny and Schumann, Heidrun}, }
Endnote
%0 Journal Article %A Saikia, Himangshu %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Extended Branch Decomposition Graphs: Structural Comparison of Scalar Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4FFB-A %R 10.1111/cgf.12360 %7 2014 %D 2014 %X We present a method to find repeating topological structures in scalar data sets. More precisely, we compare all subtrees of two merge trees against each other - in an efficient manner exploiting redundancy. This provides pair-wise distances between the topological structures defined by sub/superlevel sets, which can be exploited in several applications such as finding similar structures in the same data set, assessing periodic behavior in time-dependent data, and comparing the topology of two different data sets. To do so, we introduce a novel data structure called the extended branch decomposition graph, which is composed of the branch decompositions of all subtrees of the merge tree. Based on dynamic programming, we provide two highly efficient algorithms for computing and comparing extended branch decomposition graphs. Several applications attest to the utility of our method and its robustness against noise. %J Computer Graphics Forum %V 33 %N 3 %& 41 %P 41 - 50 %I Wiley-Blackwell %C Oxford %B Eurographics Conference on Visualization 2014 %O EuroVis 2014 Swansea, Wales, UK, June 9 &#8211; 13, 2014
Schulz, C., von Tycowicz, C., Seidel, H.-P., and Hildebrandt, K. 2014. Animating Deformable Objects Using Sparse Spacetime Constraints. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2014) 33, 4.
Export
BibTeX
@article{Schulz2014, TITLE = {Animating Deformable Objects Using Sparse Spacetime Constraints}, AUTHOR = {Schulz, Christian and von Tycowicz, Christoph and Seidel, Hans-Peter and Hildebrandt, Klaus}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2601097.2601156}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {33}, NUMBER = {4}, PAGES = {1--10}, EID = {109}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2014}, }
Endnote
%0 Journal Article %A Schulz, Christian %A von Tycowicz, Christoph %A Seidel, Hans-Peter %A Hildebrandt, Klaus %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Animating Deformable Objects Using Sparse Spacetime Constraints : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EE18-5 %R 10.1145/2601097.2601156 %7 2014 %D 2014 %K model reduction, optimal control, physically&#8208;based animation, spacetime constraints, wiggly splines %J ACM Transactions on Graphics %V 33 %N 4 %& 1 %P 1 - 10 %Z sequence number: 109 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2014 %O ACM SIGGRAPH 2014 Vancouver, BC, Canada
Schulze, M., Martinez Esturo, J., Günther, T., et al. 2014. Sets of Globally Optimal Stream Surfaces for Flow Visualization. Computer Graphics Forum (Proc. EuroVis 2014) 33, 3.
Export
BibTeX
@article{Schulze2014, TITLE = {Sets of Globally Optimal Stream Surfaces for Flow Visualization}, AUTHOR = {Schulze, Maik and Martinez Esturo, Janick and G{\"u}nther, T. and R{\"o}ssl, Christian and Seidel, Hans-Peter and Weinkauf, Tino and Theisel, Holger}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12356}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. EuroVis)}, VOLUME = {33}, NUMBER = {3}, PAGES = {1--10}, BOOKTITLE = {Eurographics Conference on Visualization (EuroVis 2014)}, EDITOR = {Carr, Hamish and Rheingans, Penny and Schumann, Heidrun}, }
Endnote
%0 Journal Article %A Schulze, Maik %A Martinez Esturo, Janick %A G&#252;nther, T. %A R&#246;ssl, Christian %A Seidel, Hans-Peter %A Weinkauf, Tino %A Theisel, Holger %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Sets of Globally Optimal Stream Surfaces for Flow Visualization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-F518-1 %R 10.1111/cgf.12356 %7 2014-07-12 %D 2014 %K Categories and Subject Descriptors (according to ACM CCS), I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling&#8212;Geometric algorithms, languages, and systems %J Computer Graphics Forum %V 33 %N 3 %& 1 %P 1 - 10 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Conference on Visualization %O EuroVis 2014 Swansea, Wales, UK, June 9 &#8211; 13, 2014
Serrà, J., Mueller, M., Grosche, P., and Arcos, J.L. 2014. Unsupervised Music Structure Annotation by Time Series Structure Features and Segment Similarity. IEEE Transactions on Multimedia 16, 5.
Export
BibTeX
@article{SerraMuellerGroscheArcos2014, TITLE = {Unsupervised Music Structure Annotation by Time Series Structure Features and Segment Similarity}, AUTHOR = {Serr{\`a}, Joan and Mueller, Meinard and Grosche, Peter and Arcos, Josep Ll}, LANGUAGE = {eng}, ISSN = {1520-9210}, DOI = {10.1109/TMM.2014.2310701}, PUBLISHER = {IEEE}, ADDRESS = {Piscataway, NJ}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014-08}, JOURNAL = {IEEE Transactions on Multimedia}, VOLUME = {16}, NUMBER = {5}, PAGES = {1229--1240}, }
Endnote
%0 Journal Article %A Serr&#224;, Joan %A Mueller, Meinard %A Grosche, Peter %A Arcos, Josep Ll %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Unsupervised Music Structure Annotation by Time Series Structure Features and Segment Similarity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-7FD2-5 %F ISI: 000340295600006 %R 10.1109/TMM.2014.2310701 %7 2014 %D 2014 %J IEEE Transactions on Multimedia %V 16 %N 5 %& 1229 %P 1229 - 1240 %I IEEE %C Piscataway, NJ %@ false
Solomon, J., Rustamov, R., Guibas, L., and Butscher, A. 2014. Earth Mover’s Distances on Discrete Surfaces. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2014) 33, 4.
Export
BibTeX
@article{SolomonSIGGRAPH2014, TITLE = {Earth Mover's Distances on Discrete Surfaces}, AUTHOR = {Solomon, Justin and Rustamov, Raif and Guibas, Leonidas and Butscher, Adrian}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2601097.2601175}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {33}, NUMBER = {4}, PAGES = {1--12}, EID = {67}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2014}, }
Endnote
%0 Journal Article %A Solomon, Justin %A Rustamov, Raif %A Guibas, Leonidas %A Butscher, Adrian %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Earth Mover's Distances on Discrete Surfaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-8044-F %F ISI: 000340000100034 %R 10.1145/2601097.2601175 %7 2014-07 %D 2014 %J ACM Transactions on Graphics %V 33 %N 4 %& 1 %P 1 - 12 %Z sequence number: 67 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2014 %O ACM SIGGRAPH 2014 Vancouver, BC, Canada
Sridhar, S., Oulasvirta, A., and Theobalt, C. 2014a. Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera. Max-Planck-Institut für Informatik, Saarbrücken.
Export
BibTeX
@techreport{Sridhar2014, TITLE = {Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera}, AUTHOR = {Sridhar, Srinath and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2014-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, TYPE = {Research Report}, }
Endnote
%0 Report %A Sridhar, Srinath %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-B5B8-8 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2014 %P 14 p. %B Research Report %@ false
Sridhar, S., Rhodin, H., Seidel, H.-P., Oulasvirta, A., and Theobalt, C. 2014b. Real-time Hand Tracking Using a Sum of Anisotropic Gaussians Model. 3DV 2014, International Conference on 3D Vision, IEEE Computer Society.
Export
BibTeX
@inproceedings{sridhar2014real, TITLE = {Real-time Hand Tracking Using a Sum of Anisotropic {Gaussians} Model}, AUTHOR = {Sridhar, Srinath and Rhodin, Helge and Seidel, Hans-Peter and Oulasvirta, Antti and Theobalt, Christian}, LANGUAGE = {eng}, ISBN = {978-1-4799-7001-8}, DOI = {10.1109/3DV.2014.37}, PUBLISHER = {IEEE Computer Society}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {3DV 2014, International Conference on 3D Vision}, PAGES = {319--326}, ADDRESS = {Tokyo, Japan}, }
Endnote
%0 Conference Proceedings %A Sridhar, Srinath %A Rhodin, Helge %A Seidel, Hans-Peter %A Oulasvirta, Antti %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Hand Tracking Using a Sum of Anisotropic Gaussians Model : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-69E9-F %R 10.1109/3DV.2014.37 %D 2014 %B International Conference on 3D Vision %Z date of event: 2014-12-08 - 2014-12-11 %C Tokyo, Japan %B 3DV 2014 %P 319 - 326 %I IEEE Computer Society %@ 978-1-4799-7001-8
Steimle, J., Hornecker, E., and Schmidt, A. 2014. Interaction Beyond the Desktop. Informatik Spektrum 37, 5.
Export
BibTeX
@article{Steimle2014, TITLE = {Interaction Beyond the Desktop}, AUTHOR = {Steimle, J{\"u}rgen and Hornecker, Eva and Schmidt, Albrecht}, LANGUAGE = {eng}, ISSN = {0170-6012}, DOI = {10.1007/s00287-014-0831-3}, PUBLISHER = {Springer}, ADDRESS = {Berlin}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Informatik Spektrum}, VOLUME = {37}, NUMBER = {5}, PAGES = {385--385}, }
Endnote
%0 Journal Article %A Steimle, J&#252;rgen %A Hornecker, Eva %A Schmidt, Albrecht %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Interaction Beyond the Desktop : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-BFEC-2 %R 10.1007/s00287-014-0831-3 %7 2014 %D 2014 %J Informatik Spektrum %V 37 %N 5 %& 385 %P 385 - 385 %I Springer %C Berlin %@ false
Stopper, G. 2014. Data-guided Flow Illustration. Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@mastersthesis{2014Master:StopperGebhard, TITLE = {Data-guided Flow Illustration}, AUTHOR = {Stopper, Gebhard}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, }
Endnote
%0 Thesis %A Stopper, Gebhard %Y Weinkauf, Tino %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Data-guided Flow Illustration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-8391-0 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %P XI, 79 p. %V master %9 master
Sykora, D., Kavan, L., Čadík, M., et al. 2014. Ink-and-ray: Bas-relief Meshes for Adding Global Illumination Effects to Hand-drawn Characters. ACM Transactions on Graphics 33, 2.
Export
BibTeX
@article{Sykora2014, TITLE = {Ink-and-ray: {Bas-relief} Meshes for Adding Global Illumination Effects to Hand-drawn Characters}, AUTHOR = {Sykora, Daniel and Kavan, Ladislav and {\v C}ad{\'i}k, Martin and Jamri{\v s}ka, Ond{\v r}ej and Jacobson, Alec and Whited, Brian and Simmons, Maryann and Sorkine-Hornung, Olga}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2591011}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {ACM Transactions on Graphics}, VOLUME = {33}, NUMBER = {2}, PAGES = {1--15}, EID = {16}, }
Endnote
%0 Journal Article %A Sykora, Daniel %A Kavan, Ladislav %A &#268;ad&#237;k, Martin %A Jamri&#353;ka, Ond&#345;ej %A Jacobson, Alec %A Whited, Brian %A Simmons, Maryann %A Sorkine-Hornung, Olga %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations %T Ink-and-ray: Bas-relief Meshes for Adding Global Illumination Effects to Hand-drawn Characters : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-83AD-3 %R 10.1145/2591011 %7 2014-03-01 %D 2014 %J ACM Transactions on Graphics %V 33 %N 2 %& 1 %P 1 - 15 %Z sequence number: 16 %I ACM %C New York, NY %@ false
Templin, K., Didyk, P., Myszkowski, K., Hefeeda, M.M., Seidel, H.-P., and Matusik, W. 2014a. Modeling and Optimizing Eye Vergence Response to Stereoscopic Cuts. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2014) 33, 4.
Export
BibTeX
@article{Templin:2014:MOE:2601097.2601148, TITLE = {Modeling and Optimizing Eye Vergence Response to Stereoscopic Cuts}, AUTHOR = {Templin, Krzysztof and Didyk, Piotr and Myszkowski, Karol and Hefeeda, Mohamed M. and Seidel, Hans-Peter and Matusik, Wojciech}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2601097.2601148}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {33}, NUMBER = {4}, PAGES = {1--8}, EID = {145}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2014}, }
Endnote
%0 Journal Article %A Templin, Krzysztof %A Didyk, Piotr %A Myszkowski, Karol %A Hefeeda, Mohamed M. %A Seidel, Hans-Peter %A Matusik, Wojciech %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Modeling and Optimizing Eye Vergence Response to Stereoscopic Cuts : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EE16-9 %R 10.1145/2601097.2601148 %7 2014 %D 2014 %K S3D, binocular, eye&#8208;tracking %J ACM Transactions on Graphics %V 33 %N 4 %& 1 %P 1 - 8 %Z sequence number: 145 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2014 %O ACM SIGGRAPH 2014 Vancouver, BC, Canada
Templin, K., Didyk, P., Myszkowski, K., and Seidel, H.-P. 2014b. Perceptually-motivated Stereoscopic Film Grain. Computer Graphics Forum (Proc. Pacific Graphics 2014) 33, 7.
Export
BibTeX
@article{Templin2014b, TITLE = {Perceptually-motivated Stereoscopic Film Grain}, AUTHOR = {Templin, Krzysztof and Didyk, Piotr and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12503}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {33}, NUMBER = {7}, PAGES = {349--358}, BOOKTITLE = {22nd Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2014)}, }
Endnote
%0 Journal Article %A Templin, Krzysztof %A Didyk, Piotr %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptually-motivated Stereoscopic Film Grain : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5DF2-B %R 10.1111/cgf.12503 %7 2014-10-28 %D 2014 %J Computer Graphics Forum %V 33 %N 7 %& 349 %P 349 - 358 %I Wiley-Blackwell %C Oxford %B 22nd Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2014 PG 2014 8 to 10 Oct 2014, Seoul, South Korea
Tevs, A., Huang, Q., Wand, M., Seidel, H.-P., and Guibas, L. 2014. Relating Shapes via Geometric Symmetries and Regularities. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2014) 33, 4.
Export
BibTeX
@article{TevsSIGGRAPH2014, TITLE = {Relating Shapes via Geometric Symmetries and Regularities}, AUTHOR = {Tevs, Art and Huang, Qixing and Wand, Michael and Seidel, Hans-Peter and Guibas, Leonidas}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2601097.2601220}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {33}, NUMBER = {4}, PAGES = {1--12}, EID = {119}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2014}, }
Endnote
%0 Journal Article %A Tevs, Art %A Huang, Qixing %A Wand, Michael %A Seidel, Hans-Peter %A Guibas, Leonidas %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Relating Shapes via Geometric Symmetries and Regularities : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-8052-F %F ISI: 000340000100086 %R 10.1145/2601097.2601220 %7 2014-07 %D 2014 %J ACM Transactions on Graphics %V 33 %N 4 %& 1 %P 1 - 12 %Z sequence number: 119 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2014 %O Vancouver, BC, Canada ACM SIGGRAPH 2014
Tiab, J. 2014. Design and Evaluation Techniques for Cuttable Multi-touch Sensor Sheets. Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@mastersthesis{TiabMastersThesis2014, TITLE = {Design and Evaluation Techniques for Cuttable Multi-touch Sensor Sheets}, AUTHOR = {Tiab, John}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, }
Endnote
%0 Thesis %A Tiab, John %Y Steimle, J&#252;rgen %A referee: Kr&#252;ger, Antonio %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Design and Evaluation Techniques for Cuttable Multi-touch Sensor Sheets : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D8E-D %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %V master %9 master
Vadgama, N. 2014. Design and Implementation of Single Layer Deformation Sensors. Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@mastersthesis{VadgamaMastersThesis2014, TITLE = {Design and Implementation of Single Layer Deformation Sensors}, AUTHOR = {Vadgama, Nirzaree}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, }
Endnote
%0 Thesis %A Vadgama, Nirzaree %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Design and Implementation of Single Layer Deformation Sensors : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D8A-6 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %V master %9 master
Vangorp, P., Mantiuk, R., Bazyluk, B., et al. 2014. Depth from HDR: Depth Induction or Increased Realism? SAP 2014, ACM Symposium on Applied Perception, ACM.
Export
BibTeX
@inproceedings{Vangorp2014, TITLE = {Depth from {HDR}: {Depth} Induction or Increased Realism?}, AUTHOR = {Vangorp, Peter and Mantiuk, Rafal and Bazyluk, Bartosz and Myszkowski, Karol and Mantiuk, Rados{\l}aw and Watt, Simon J. and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-4503-3009-1}, DOI = {10.1145/2628257.2628258}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {SAP 2014, ACM Symposium on Applied Perception}, EDITOR = {Bailey, Reynold and Kuhl, Scott}, PAGES = {71--78}, ADDRESS = {Vancouver, Canada}, }
Endnote
%0 Conference Proceedings %A Vangorp, Peter %A Mantiuk, Rafal %A Bazyluk, Bartosz %A Myszkowski, Karol %A Mantiuk, Rados&#322;aw %A Watt, Simon J. %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Depth from HDR: Depth Induction or Increased Realism? : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-34DB-5 %R 10.1145/2628257.2628258 %D 2014 %B ACM Symposium on Applied Perception %Z date of event: 2014-08-08 - 2014-08-09 %C Vancouver, Canada %K binocular disparity, contrast, luminance, stereo 3D %B SAP 2014 %E Bailey, Reynold; Kuhl, Scott %P 71 - 78 %I ACM %@ 978-1-4503-3009-1
Vihavainen, S., Lampinen, A., Oulasvirta, A., Silfverberg, S., and Lehmuskallio, A. 2014. The Clash between Privacy and Automation in Social Media. IEEE Pervasive Computing 13, 1.
Export
BibTeX
@article{vihavainen2013privacy, TITLE = {The Clash between Privacy and Automation in Social Media}, AUTHOR = {Vihavainen, Sami and Lampinen, Airi and Oulasvirta, Antti and Silfverberg, Suvi and Lehmuskallio, Asko}, LANGUAGE = {eng}, ISSN = {1536-1268}, DOI = {10.1109/MPRV.2013.25}, LOCALID = {Local-ID: B09A0D6E64E81ADAC1257AF00020D3A5-Vihavainen2013}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {IEEE Pervasive Computing}, VOLUME = {13}, NUMBER = {1}, PAGES = {56--63}, }
Endnote
%0 Journal Article %A Vihavainen, Sami %A Lampinen, Airi %A Oulasvirta, Antti %A Silfverberg, Suvi %A Lehmuskallio, Asko %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T The Clash between Privacy and Automation in Social Media : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-17FB-F %R 10.1109/MPRV.2013.25 %F OTHER: Local-ID: B09A0D6E64E81ADAC1257AF00020D3A5-Vihavainen2013 %7 2014-02-28 %D 2014 %J IEEE Pervasive Computing %V 13 %N 1 %& 56 %P 56 - 63 %I IEEE Computer Society %C Los Alamitos, CA %@ false
Vorba, J., Karlik, O., Sik, M., Ritschel, T., and Krivanek, J. 2014. On-line Learning of Parametric Mixture Models for Light Transport Simulation. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2014) 33, 4.
Export
BibTeX
@article{vorba2014line, TITLE = {On-line Learning of Parametric Mixture Models for Light Transport Simulation}, AUTHOR = {Vorba, Jiri and Karlik, Ondrej and Sik, Martin and Ritschel, Tobias and Krivanek, Jaroslav}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2601097.2601203}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {33}, NUMBER = {4}, PAGES = {1--11}, EID = {101}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2014}, }
Endnote
%0 Journal Article %A Vorba, Jiri %A Karlik, Ondrej %A Sik, Martin %A Ritschel, Tobias %A Krivanek, Jaroslav %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T On-line Learning of Parametric Mixture Models for Light Transport Simulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-6C54-5 %F ISI: 000340000100068 %R 10.1145/2601097.2601203 %7 2014-07 %D 2014 %J ACM Transactions on Graphics %V 33 %N 4 %& 1 %P 1 - 11 %Z sequence number: 101 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2014 %O ACM SIGGRAPH 2014 Vancouver, BC, Canada
Wang, L. and Li, C. 2014. Spectrum-based Kernel Length Estimation for Gaussian Process Classification. IEEE Transactions on Cybernetics 44, 6.
Export
BibTeX
@article{Li2012z, TITLE = {Spectrum-based Kernel Length Estimation for {Gaussian} Process Classification}, AUTHOR = {Wang, Liang and Li, Chuan}, LANGUAGE = {eng}, ISSN = {2168-2267}, DOI = {10.1109/TCYB.2013.2273077}, LOCALID = {Local-ID: 8EB72C89BC30E9D6C1257C13003876D7-Li2012}, PUBLISHER = {IEEE}, ADDRESS = {Piscataway, NJ}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {IEEE Transactions on Cybernetics}, VOLUME = {44}, NUMBER = {6}, PAGES = {805--816}, }
Endnote
%0 Journal Article %A Wang, Liang %A Li, Chuan %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Spectrum-based Kernel Length Estimation for Gaussian Process Classification : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3CFB-B %R 10.1109/TCYB.2013.2273077 %F OTHER: Local-ID: 8EB72C89BC30E9D6C1257C13003876D7-Li2012 %7 2013-07-02 %D 2014 %J IEEE Transactions on Cybernetics %V 44 %N 6 %& 805 %P 805 - 816 %I IEEE %C Piscataway, NJ %@ false
Wang, Z., Martinez Esturo, J., Seidel, H.-P., and Weinkauf, T. 2014. Pattern Search in Flows based on Similarity of Stream Line Segments. VMV 2014 Vision, Modeling and Visualization, Eurographics Association.
Abstract
We propose a method that allows users to define flow features in form of patterns represented as sparse sets of stream line segments. Our approach finds similar occurrences in the same or other time steps. Related approaches define patterns using dense, local stencils or support only single segments. Our patterns are defined sparsely and can have a significant extent, i.e., they are integration-based and not local. This allows for a greater flexibility in defining features of interest. Similarity is measured using intrinsic curve properties only, which enables invariance to location, orientation, and scale. Our method starts with splitting stream lines using globally-consistent segmentation criteria. It strives to maintain the visually apparent features of the flow as a collection of stream line segments. Most importantly, it provides similar segmentations for similar flow structures. For user-defined patterns of curve segments, our algorithm finds similar ones that are invariant to similarity transformations. We showcase the utility of our method using different 2D and 3D flow fields.
Export
BibTeX
@inproceedings{wang14, TITLE = {Pattern Search in Flows based on Similarity of Stream Line Segments}, AUTHOR = {Wang, Zhongjie and Martinez Esturo, Janick and Seidel, Hans-Peter and Weinkauf, Tino}, LANGUAGE = {eng}, PUBLISHER = {Eurographics Association}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014-10}, ABSTRACT = {We propose a method that allows users to define flow features in form of patterns represented as sparse sets of stream line segments. Our approach finds similar occurrences in the same or other time steps. Related approaches define patterns using dense, local stencils or support only single segments. Our patterns are defined sparsely and can have a significant extent, i.e., they are integration-based and not local. This allows for a greater flexibility in defining features of interest. Similarity is measured using intrinsic curve properties only, which enables invariance to location, orientation, and scale. Our method starts with splitting stream lines using globally-consistent segmentation criteria. It strives to maintain the visually apparent features of the flow as a collection of stream line segments. Most importantly, it provides similar segmentations for similar flow structures. For user-defined patterns of curve segments, our algorithm finds similar ones that are invariant to similarity transformations. We showcase the utility of our method using different 2D and 3D flow fields.}, BOOKTITLE = {VMV 2014 Vision, Modeling and Visualization}, EDITOR = {Bender, Jan and Kuijper, Arjan and von Landesberger, Tatiana and Theisel, Holger and Urban, Philipp}, PAGES = {23--30}, ADDRESS = {Darmstadt, Germany}, }
Endnote
%0 Conference Proceedings %A Wang, Zhongjie %A Martinez Esturo, Janick %A Seidel, Hans-Peter %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Pattern Search in Flows based on Similarity of Stream Line Segments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5337-3 %D 2014 %B 19th International Workshop on Vision, Modeling and Visualization %Z date of event: 2014-10-08 - 2014-10-10 %C Darmstadt, Germany %X We propose a method that allows users to define flow features in form of patterns represented as sparse sets of stream line segments. Our approach finds similar occurrences in the same or other time steps. Related approaches define patterns using dense, local stencils or support only single segments. Our patterns are defined sparsely and can have a significant extent, i.e., they are integration-based and not local. This allows for a greater flexibility in defining features of interest. Similarity is measured using intrinsic curve properties only, which enables invariance to location, orientation, and scale. Our method starts with splitting stream lines using globally-consistent segmentation criteria. It strives to maintain the visually apparent features of the flow as a collection of stream line segments. Most importantly, it provides similar segmentations for similar flow structures. For user-defined patterns of curve segments, our algorithm finds similar ones that are invariant to similarity transformations. We showcase the utility of our method using different 2D and 3D flow fields. %B VMV 2014 Vision, Modeling and Visualization %E Bender, Jan; Kuijper, Arjan; von Landesberger, Tatiana; Theisel, Holger; Urban, Philipp %P 23 - 30 %I Eurographics Association %U http://tinoweinkauf.net/
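The abstract of the Wang et al. entry above measures similarity of stream line segments using intrinsic curve properties only, which makes matches invariant to location, orientation, and scale. The Python/NumPy sketch below is only a minimal illustration of that general idea, not the segmentation or similarity measure of the paper: it resamples a polyline segment uniformly in normalized arc length and compares discrete curvature profiles, giving a pose- and scale-invariant distance between two segments. All function names and parameters here are hypothetical.

# Illustrative sketch (hypothetical, not the paper's method): compare two stream
# line segments by an intrinsic descriptor -- discrete curvature sampled
# uniformly in normalized arc length -- which ignores position, orientation,
# and uniform scale. Assumes segments are (N, 2) or (N, 3) arrays of distinct,
# non-degenerate vertices.
import numpy as np

def intrinsic_descriptor(points, n_samples=32):
    points = np.asarray(points, dtype=float)
    seglen = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seglen)])
    s /= s[-1]                                  # normalized arc length in [0, 1]
    t = np.linspace(0.0, 1.0, n_samples)        # uniform resampling positions
    resampled = np.column_stack([np.interp(t, s, points[:, d])
                                 for d in range(points.shape[1])])
    d1 = np.diff(resampled, axis=0)
    u = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
    cos_angle = np.clip(np.einsum('ij,ij->i', u[:-1], u[1:]), -1.0, 1.0)
    # turning angle per unit of normalized arc length ~ discrete curvature
    return np.arccos(cos_angle) * (n_samples - 1)

def segment_distance(a, b, n_samples=32):
    # smaller distance = more similar shape, regardless of pose and size
    return np.linalg.norm(intrinsic_descriptor(a, n_samples) -
                          intrinsic_descriptor(b, n_samples))

A query pattern consisting of several segments could then be matched by aggregating such per-segment distances, although the paper's actual segmentation criteria and similarity measure are more involved than this sketch.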
Weigel, M., Mehta, V., and Steimle, J. 2014. More Than Touch: Understanding How People Use Skin as an Input Surface for Mobile Computing. CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM.
Abstract
Motion-capture-based biomechanical simulation is a non-invasive analysis method that yields a rich description of posture, joint, and muscle activity in human movement. The method is presently gaining ground in sports, medicine, and industrial ergonomics, but it also bears great potential for studies in HCI where the physical ergonomics of a design is important. To make the method more broadly accessible, we study its predictive validity for movements and users typical to studies in HCI. We discuss the sources of error in biomechanical simulation and present results from two validation studies conducted with a state-of-the-art system. Study I tested aimed movements ranging from multitouch gestures to dancing, finding out that the critical limiting factor is the size of movement. Study II compared muscle activation predictions to surface-EMG recordings in a 3D pointing task. The data shows medium-to-high validity that is, however, constrained by some characteristics of the movement and the user. We draw concrete recommendations to practitioners and discuss challenges to developing the method further.
Export
BibTeX
@inproceedings{WeigelMehtaSteimle2014, TITLE = {More Than Touch: {Understanding} How People Use Skin as an Input Surface for Mobile Computing}, AUTHOR = {Weigel, Martin and Mehta, Vikram and Steimle, J{\"u}rgen}, LANGUAGE = {eng}, ISBN = {978-1-4503-2473-1}, URL = {http://doi.acm.org/10.1145/2556288.2557239}, DOI = {10.1145/2556288.2557239}, PUBLISHER = {ACM}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {Motion-capture-based biomechanical simulation is a non-invasive analysis method that yields a rich description of posture, joint, and muscle activity in human movement. The method is presently gaining ground in sports, medicine, and industrial ergonomics, but it also bears great potential for studies in HCI where the physical ergonomics of a design is important. To make the method more broadly accessible, we study its predictive validity for movements and users typical to studies in HCI. We discuss the sources of error in biomechanical simulation and present results from two validation studies conducted with a state-of-the-art system. Study I tested aimed movements ranging from multitouch gestures to dancing, finding out that the critical limiting factor is the size of movement. Study II compared muscle activation predictions to surface-EMG recordings in a 3D pointing task. The data shows medium-to-high validity that is, however, constrained by some characteristics of the movement and the user. We draw concrete recommendations to practitioners and discuss challenges to developing the method further.}, BOOKTITLE = {CHI 2014, 32nd Annual ACM Conference on Human Factors in Computing Systems}, PAGES = {179--188}, ADDRESS = {Toronto, Canada}, }
Endnote
%0 Conference Proceedings %A Weigel, Martin %A Mehta, Vikram %A Steimle, J&#252;rgen %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T More Than Touch: Understanding How People Use Skin as an Input Surface for Mobile Computing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-4D36-3 %R 10.1145/2556288.2557239 %U http://doi.acm.org/10.1145/2556288.2557239 %D 2014 %B 32nd Annual ACM Conference on Human Factors in Computing Systems %Z date of event: 2014-04-26 - 2014-05-01 %C Toronto, Canada %X Motion-capture-based biomechanical simulation is a non-invasive analysis method that yields a rich description of posture, joint, and muscle activity in human movement. The method is presently gaining ground in sports, medicine, and industrial ergonomics, but it also bears great potential for studies in HCI where the physical ergonomics of a design is important. To make the method more broadly accessible, we study its predictive validity for movements and users typical to studies in HCI. We discuss the sources of error in biomechanical simulation and present results from two validation studies conducted with a state-of-the-art system. Study I tested aimed movements ranging from multitouch gestures to dancing, finding out that the critical limiting factor is the size of movement. Study II compared muscle activation predictions to surface-EMG recordings in a 3D pointing task. The data shows medium-to-high validity that is, however, constrained by some characteristics of the movement and the user. We draw concrete recommendations to practitioners and discuss challenges to developing the method further. %B CHI 2014 %P 179 - 188 %I ACM %@ 978-1-4503-2473-1
Weinkauf, T. 2014a. On the (Un)Suitability of Strict Feature Definitions for Uncertain Data. In: Scientific Visualization. Springer, London.
Abstract
We discuss strategies to successfully work with strict feature definitions such as topology in the presence of noisy/uncertain data. To that end, we review previous work from the literature and identify three strategies: the development of fuzzy analogs to strict feature definitions, the aggregation of features, and the filtering of features. Regarding the latter, we will present a detailed discussion of filtering ridges/valleys and topological structures.
Export
BibTeX
@incollection{weinkauf14b, TITLE = {On the (Un)Suitability of Strict Feature Definitions for Uncertain Data}, AUTHOR = {Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-1-4471-6496-8}, DOI = {10.1007/978-1-4471-6497-5_4}, PUBLISHER = {Springer}, ADDRESS = {London}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {We discuss strategies to successfully work with strict feature definitions such as topology in the presence of noisy/uncertain data. To that end, we review previous work from the literature and identify three strategies: the development of fuzzy analogs to strict feature definitions, the aggregation of features, and the filtering of features. Regarding the latter, we will present a detailed discussion of filtering ridges/valleys and topological structures.}, BOOKTITLE = {Scientific Visualization}, EDITOR = {Hansen, Charles D. and Chen, Min and Johnson, Christopher R. and Kaufman, Arie E. and Hagen, Hans}, PAGES = {45--50}, }
Endnote
%0 Book Section %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society %T On the (Un)Suitability of Strict Feature Definitions for Uncertain Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5315-F %R 10.1007/978-1-4471-6497-5_4 %D 2014 %X We discuss strategies to successfully work with strict feature definitions such as topology in the presence of noisy/uncertain data. To that end, we review previous work from the literature and identify three strategies: the development of fuzzy analogs to strict feature definitions, the aggregation of features, and the filtering of features. Regarding the latter, we will present a detailed discussion of filtering ridges/valleys and topological structures. %B Scientific Visualization %E Hansen, Charles D.; Chen, Min; Johnson, Christopher R.; Kaufman, Arie E.; Hagen, Hans %P 45 - 50 %I Springer %C London %@ 978-1-4471-6496-8
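One common instance of the "filtering of features" strategy discussed in the book chapter above is persistence-based filtering of topological structures. The sketch below is a minimal, hypothetical 1D illustration (not taken from the chapter): it pairs local minima of a noisy signal with their topological persistence via a sublevel-set sweep with union-find, so that minima below a persistence threshold can be discarded as noise. It assumes distinct sample values; all names are illustrative.

# Minimal 1D persistence filter (illustrative only). Sweep samples from lowest
# to highest value and grow components with union-find; a component dies when
# it merges into an older one, and its persistence is death value - birth value.
import numpy as np

def persistence_of_minima(values):
    values = np.asarray(values, dtype=float)
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    persistence = {}                      # birth index (a local minimum) -> persistence
    for idx in np.argsort(values):        # ascending sublevel-set sweep
        parent[idx] = idx
        roots = {find(n) for n in (idx - 1, idx + 1) if n in parent}
        if not roots:
            continue                      # a new component is born at a local minimum
        roots = sorted(roots, key=lambda r: values[r])
        for r in roots[1:]:               # younger components die at this merge
            persistence[r] = values[idx] - values[r]
            parent[r] = roots[0]
        parent[idx] = roots[0]
    return persistence                    # the global minimum never dies and is not listed

def filtered_minima(values, min_persistence):
    # keep only minima whose persistence reaches the threshold; the global
    # minimum is essential and therefore always kept
    pers = persistence_of_minima(values)
    global_min = int(np.argmin(np.asarray(values, dtype=float)))
    return sorted({global_min} | {int(i) for i, p in pers.items() if p >= min_persistence})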
Weinkauf, T. 2014b. Differential Descriptions for Characteristic Curves and their Possible Role in Flow Analysis. Mixing, Transport and Coherent Structures, MFO.
Export
BibTeX
@inproceedings{weinkauf14a, TITLE = {Differential Descriptions for Characteristic Curves and their Possible Role in Flow Analysis}, AUTHOR = {Weinkauf, Tino}, LANGUAGE = {eng}, DOI = {10.4171/OWR/2014/04}, PUBLISHER = {MFO}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {Mixing, Transport and Coherent Structures}, EDITOR = {Balasuriya, Sanjeeva and Haller, George and Ouellette, Nicholas and Rom-Kedar, Vered}, PAGES = {241--243}, SERIES = {Mathematisches Forschungsinstitut Oberwolfach Report}, VOLUME = {04/2014}, ADDRESS = {Oberwolfach, Germany}, }
Endnote
%0 Conference Proceedings %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Differential Descriptions for Characteristic Curves and their Possible Role in Flow Analysis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5309-B %D 2014 %B Mixing, Transport and Coherent Structures Workshop %Z date of event: 2014-01-26 - 2014-02-01 %C Oberwolfach, Germany %B Mixing, Transport and Coherent Structures %E Balasuriya, Sanjeeva; Haller, George; Ouellette, Nicholas; Rom-Kedar, Vered %P 241 - 243 %I MFO %R 10.4171/OWR/2014/04 %B Mathematisches Forschungsinstitut Oberwolfach Report %N 04/2014
Wu, C., Zollhöfer, M., Nießner, M., Stamminger, M., Izadi, S., and Theobalt, C. 2014a. Real-time Shading-based Refinement for Consumer Depth Cameras. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2014) 33, 6.
Export
BibTeX
@article{Wu:2014:RSR:2661229.2661232, TITLE = {Real-time Shading-based Refinement for Consumer Depth Cameras}, AUTHOR = {Wu, Chenglei and Zollh{\"o}fer, Michael and Nie{\ss}ner, Matthias and Stamminger, Marc and Izadi, Shahram and Theobalt, Christian}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2661229.2661232}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {33}, NUMBER = {6}, PAGES = {1--10}, EID = {200}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2014}, }
Endnote
%0 Journal Article %A Wu, Chenglei %A Zollh&#246;fer, Michael %A Nie&#223;ner, Matthias %A Stamminger, Marc %A Izadi, Shahram %A Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Shading-based Refinement for Consumer Depth Cameras : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF75-F %R 10.1145/2661229.2661232 %7 2014 %D 2014 %K depth camera, real-time, shading-based refinement %J ACM Transactions on Graphics %O TOG %V 33 %N 6 %& 1 %P 1 - 10 %Z sequence number: 200 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2014 %O ACM SIGGRAPH Asia 2014 Shenzhen, China
Wu, C. 2014. Inverse Rendering for Scene Reconstruction in General Environments. urn:nbn:de:bsz:291-scidok-58326.
Export
BibTeX
@phdthesis{WuPhD2014, TITLE = {Inverse Rendering for Scene Reconstruction in General Environments}, AUTHOR = {Wu, Chenglei}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58326}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, }
Endnote
%0 Thesis %A Wu, Chenglei %A referee: Seidel, Hans-Peter %Y Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Inverse Rendering for Scene Reconstruction in General Environments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-34B7-6 %U urn:nbn:de:bsz:291-scidok-58326 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %P XVI, 184 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5832/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
Wu, X., Wand, M., Hildebrandt, K., Kohli, P., and Seidel, H.-P. 2014b. Real-time Symmetry-preserving Deformation. Computer Graphics Forum (Proc. Pacific Graphics 2014) 33, 7.
Export
BibTeX
@article{Wu2014, TITLE = {Real-time Symmetry-preserving Deformation}, AUTHOR = {Wu, Xiaokun and Wand, Michael and Hildebrandt, Klaus and Kohli, Pushmeet and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12491}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {33}, NUMBER = {7}, PAGES = {229--238}, BOOKTITLE = {22nd Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2014)}, }
Endnote
%0 Journal Article %A Wu, Xiaokun %A Wand, Michael %A Hildebrandt, Klaus %A Kohli, Pushmeet %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Symmetry-preserving Deformation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3D08-5 %R 10.1111/cgf.12491 %7 2014-10-28 %D 2014 %J Computer Graphics Forum %V 33 %N 7 %& 229 %P 229 - 238 %I Wiley-Blackwell %C Oxford %B 22nd Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2014 PG 2014 8 to 10 Oct 2014, Seoul, South Korea
Wu, X., Li, C., Wand, M., Hildebrandt, K., Jansen, S., and Seidel, H.-P. 2014c. 3D Model Retargeting Using Offset Statistics. 2nd International Conference on 3D Vision, IEEE.
Export
BibTeX
@inproceedings{Wu2014a, TITLE = {{3D} Model Retargeting Using Offset Statistics}, AUTHOR = {Wu, Xiaokun and Li, Chuan and Wand, Michael and Hildebrandt, Klaus and Jansen, Silke and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-4799-7000-1}, PUBLISHER = {IEEE}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, BOOKTITLE = {2nd International Conference on 3D Vision}, PAGES = {353--360}, ADDRESS = {Tokyo, Japan}, }
Endnote
%0 Conference Proceedings %A Wu, Xiaokun %A Li, Chuan %A Wand, Michael %A Hildebrandt, Klaus %A Jansen, Silke %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T 3D Model Retargeting Using Offset Statistics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D63-D %D 2014 %B 2nd International Conference on 3D Vision %Z date of event: 2014-12-08 - 2014-12-11 %C Tokyo, Japan %B 2nd International Conference on 3D Vision %P 353 - 360 %I IEEE %@ 978-1-4799-7000-1
Zeltwanger, M. 2014. On the Combination of KLT Tracking and SIFT Matching. Universität des Saarlandes, Saarbrücken.
Abstract
Finding correspondences is a crucial aspect in many fields of computer vision and computer graphics such as structure from motion, camera motion estimation and 3D reconstruction. Current feature point detection and motion tracking algorithms provide accurate correspondences for a sequence of images. However, if the corresponding 3D point of some feature track is occluded, leaves the image or is rejected for some other reason, the feature track is dropped. If the point reappears in some later image, a new track is started without knowing of the existence of the old track, thus losing important information about the scene and the motion of the point. There exists no single algorithm that allows to track feature points in a short range as well as long range. We propose an algorithm that takes advantage of both, optic flow based feature point tracker and descriptor based long range matching. While the feature point tracker provides accurate feature tracks, we use descriptor based matching to combine new tracks with already existing tracks, thereby reducing the number of feature tracks covering the whole scene while increasing the number of feature points per track. Feature tracks are represented by a minimal amount of feature descriptors that describe the feature track best.
Export
BibTeX
@mastersthesis{Zeltwanger2013, TITLE = {On the Combination of {KLT} Tracking and {SIFT} Matching}, AUTHOR = {Zeltwanger, Marco}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, ABSTRACT = {Finding correspondences is a crucial aspect in many fields of computer vision and computer graphics such as structure from motion, camera motion estimation and 3D reconstruction. Current feature point detection and motion tracking algorithms provide accurate correspondences for a sequence of images. However, if the corresponding 3D point of some feature track is occluded, leaves the image or is rejected for some other reason, the feature track is dropped. If the point reappears in some later image, a new track is started without knowing of the existence of the old track, thus losing important information about the scene and the motion of the point. There exists no single algorithm that allows to track feature points in a short range as well as long range.\\ We propose an algorithm that takes advantage of both, optic flow based feature point tracker and descriptor based long range matching. While the feature point tracker provides accurate feature tracks, we use descriptor based matching to combine new tracks with already existing tracks, thereby reducing the number of feature tracks covering the whole scene while increasing the number of feature points per track. Feature tracks are represented by a minimal amount of feature descriptors that describe the feature track best.}, }
Endnote
%0 Thesis %A Zeltwanger, Marco %+ Computer Graphics, MPI for Informatics, Max Planck Society %T On the Combination of KLT Tracking and SIFT Matching : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-34C3-A %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %V master %9 master %X Finding correspondences is a crucial aspect in many fields of computer vision and computer graphics such as structure from motion, camera motion estimation and 3D reconstruction. Current feature point detection and motion tracking algorithms provide accurate correspondences for a sequence of images. However, if the corresponding 3D point of some feature track is occluded, leaves the image or is rejected for some other reason, the feature track is dropped. If the point reappears in some later image, a new track is started without knowing of the existence of the old track, thus losing important information about the scene and the motion of the point. There exists no single algorithm that allows to track feature points in a short range as well as long range.\\ We propose an algorithm that takes advantage of both, optic flow based feature point tracker and descriptor based long range matching. While the feature point tracker provides accurate feature tracks, we use descriptor based matching to combine new tracks with already existing tracks, thereby reducing the number of feature tracks covering the whole scene while increasing the number of feature points per track. Feature tracks are represented by a minimal amount of feature descriptors that describe the feature track best.
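The thesis abstract above combines a short-range optic-flow tracker with descriptor-based long-range matching so that feature tracks which were dropped can be re-linked when the point reappears. As a rough, hypothetical illustration of those two building blocks (not the thesis implementation), the Python/OpenCV sketch below tracks points with pyramidal Lucas-Kanade (KLT), describes them with SIFT, and matches descriptors of lost tracks against newly started ones so that matching tracks could be merged. It assumes OpenCV with SIFT support (cv2.SIFT_create) and grayscale frames; all helper names and thresholds are made up.

# Illustrative sketch only (hypothetical helpers, not the thesis code).
# Short-range correspondences come from KLT (pyramidal Lucas-Kanade); SIFT
# descriptors of lost tracks are matched against new tracks to re-link them.
import cv2
import numpy as np

def klt_step(prev_gray, gray, prev_pts):
    # track points from the previous frame to the current one; prev_pts is an
    # (N, 1, 2) float32 array as returned by cv2.goodFeaturesToTrack
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    ok = status.ravel() == 1
    return nxt[ok], ok                     # surviving points and which tracks survived

def sift_at_points(gray, pts, patch_size=16.0):
    # compute SIFT descriptors at given track positions so a dropped track can
    # later be recognized again by appearance
    kps = [cv2.KeyPoint(float(x), float(y), patch_size) for x, y in pts.reshape(-1, 2)]
    _kps, desc = cv2.SIFT_create().compute(gray, kps)
    return desc

def relink(lost_desc, new_desc, max_dist=250.0):
    # match descriptors of lost tracks against newly started tracks; a good
    # match suggests the new track continues the old one and they can be merged
    if lost_desc is None or new_desc is None or len(lost_desc) == 0 or len(new_desc) == 0:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(lost_desc, new_desc)
    return [(m.queryIdx, m.trainIdx) for m in matches if m.distance < max_dist]

In a tracking loop, klt_step would maintain existing tracks frame to frame, while relink would be called whenever new tracks are started, merging them with previously lost tracks whose descriptors match; the thesis additionally keeps only a minimal set of representative descriptors per track.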
Zollhöfer, M., Nießner, M., Izadi, S., et al. 2014. Real-time Non-rigid Reconstruction Using an RGB-D Camera. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2014) 33, 4.
Export
BibTeX
@article{Zollhofer:2014:RNR:2601097.2601165, TITLE = {Real-time Non-rigid Reconstruction Using an {RGB-D} Camera}, AUTHOR = {Zollh{\"o}fer, Michael and Nie{\ss}ner, Matthias and Izadi, Shahram and Rehmann, Christoph and Zach, Christopher and Fisher, Matthew and Wu, Chenglei and Fitzgibbon, Andrew and Loop, Charles and Theobalt, Christian and Stamminger, Marc}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2601097.2601165}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, MARGINALMARK = {$\bullet$}, DATE = {2014}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {33}, NUMBER = {4}, PAGES = {1--12}, EID = {156}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2014}, }
Endnote
%0 Journal Article %A Zollh&#246;fer, Michael %A Nie&#223;ner, Matthias %A Izadi, Shahram %A Rehmann, Christoph %A Zach, Christopher %A Fisher, Matthew %A Wu, Chenglei %A Fitzgibbon, Andrew %A Loop, Charles %A Theobalt, Christian %A Stamminger, Marc %+ External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Real-time Non-rigid Reconstruction Using an RGB-D Camera : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-AF73-4 %R 10.1145/2601097.2601165 %7 2014 %D 2014 %K 3D scanning, deformation, depth camera, non-rigid, shape, stereo matching, surface reconstruction %J ACM Transactions on Graphics %O TOG %V 33 %N 4 %& 1 %P 1 - 12 %Z sequence number: 156 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2014 %O ACM SIGGRAPH 2014 Vancouver, BC, Canada
2013
Afsari Yeganeh, E. 2013. Human Motion Alignment Using a Depth Camera. Universität des Saarlandes, Saarbrücken.
Export
BibTeX
@mastersthesis{Master2013:Elham, TITLE = {Human Motion Alignment Using a Depth Camera}, AUTHOR = {Afsari Yeganeh, Elham}, LANGUAGE = {eng}, LOCALID = {Local-ID: 9D35149C077B583BC1257BA20027DAB1-Master2013:Elham}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, MARGINALMARK = {$\bullet$}, DATE = {2013}, }
Endnote
%0 Thesis %A Afsari Yeganeh, Elham %Y Theobalt, Christian %A referee: Oulasvirta, Antti %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Human Motion Alignment Using a Depth Camera : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-1740-2 %F OTHER: Local-ID: 9D35149C077B583BC1257BA20027DAB1-Master2013:Elham %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2013 %V master %9 master
Amor, M., Doallo, R., Fraguela, B.B., Herrero, J.R., Quintana-Orti, E.S., and Strzodka, R. 2013. Graphics Processing Unit Computing and Exploitation of Hardware Accelerators. Concurrency and Computation : Practice and Experience 25, 8.
Export
BibTeX
@article{Strzodka2013, TITLE = {Graphics Processing Unit Computing and Exploitation of Hardware Accelerators}, AUTHOR = {Amor, Margarita and Doallo, Ramon and Fraguela, Basilio B. and Herrero, Jose R. and Quintana-Orti, Enrique S. and Strzodka, Robert}, LANGUAGE = {eng}, ISSN = {1532-0634}, DOI = {10.1002/cpe.2967}, PUBLISHER = {Wiley}, ADDRESS = {Chichester}, YEAR = {2013}, MARGINALMARK = {$\bullet$}, DATE = {2013}, JOURNAL = {Concurrency and Computation : Practice and Experience}, VOLUME = {25}, NUMBER = {8}, PAGES = {1104--1106}, BOOKTITLE = {Special Issue: Combined Special Issues on Parallel Architectures and Bioinspired Algorithms and GPU Computing and Exploitation of Hardware Accelerators}, EDITOR = {Hidalgo, Jose Ignacio and Fern{\'a}ndez-de-Vega, Francisco and Amor, Margarita and Doallo, Ram{\'o}n and Fraguela, Basilio B. and Herrero, Jos{\'e} R. and Quintana-Ort{\'i}, Enrique and Strzodka, Robert}, }
Endnote
%0 Journal Article %A Amor, Margarita %A Doallo, Ramon %A Fraguela, Basilio B. %A Herrero, Jose R. %A Quintana-Orti, Enrique S. %A Strzodka, Robert %+ External Organizations External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Graphics Processing Unit Computing and Exploitation of Hardware Accelerators : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0018-A735-B %R 10.1002/cpe.2967 %7 2013 %D 2013 %J Concurrency and Computation : Practice and Experience %V 25 %N 8 %& 1104 %P 1104 - 1106 %I Wiley %C Chichester %@ false %B Special Issue: Combined Special Issues on Parallel Architectures and Bioinspired Algorithms and GPU Computing and Exploitation of Hardware Accelerators
Bachynskyi, M., Oulasvirta, A., Palmas, G., and Weinkauf, T. 2013. Biomechanical Simulation in the Analysis of Aimed Movements. CHI 2013 Extended Abstracts, ACM.
Abstract
For efficient design of gestural user interfaces both performance and fatigue characteristics of movements must be understood. We are developing a novel method that allows for biomechanical analysis in conjunction with performance analysis. We capture motion data using optical tracking from which we can compute performance measures such as speed and accuracy. The measured motion data also serves as input for a biomechanical simulation using inverse dynamics and static optimization on a full-body skeletal model. The simulation augments the data by biomechanical quantities from which we derive an index of fatigue. We are working on an interactive analysis tool that allows practitioners to identify and compare movements with desirable performance and fatigue properties. We show the applicability of our methodology using a case study of rapid aimed movements to targets covering the 3D movement space uniformly.
Export
BibTeX
@inproceedings{bachynskyi13a, TITLE = {Biomechanical Simulation in the Analysis of Aimed Movements}, AUTHOR = {Bachynskyi, Myroslav and Oulasvirta, Antti and Palmas, Gregorio and Weinkauf, Tino}, LANGUAGE = {eng}, ISBN = {978-1-4503-1952-2}, DOI = {10.1145/2468356.2468406}, LOCALID = {Local-ID: D2FBFC0A6CB98FA9C1257B1000704153-Bachynskyi2012}, PUBLISHER = {ACM}, YEAR = {2013}, MARGINALMARK = {$\bullet$}, DATE = {2013}, ABSTRACT = {For efficient design of gestural user interfaces both performance and fatigue characteristics of movements must be understood. We are developing a novel method that allows for biomechanical analysis in conjunction with performance analysis. We capture motion data using optical tracking from which we can compute performance measures such as speed and accuracy. The measured motion data also serves as input for a biomechanical simulation using inverse dynamics and static optimization on a full-body skeletal model. The simulation augments the data by biomechanical quantities from which we derive an index of fatigue. We are working on an interactive analysis tool that allows practitioners to identify and compare movements with desirable performance and fatigue properties. We show the applicability of our methodology using a case study of rapid aimed movements to targets covering the 3D movement space uniformly.}, BOOKTITLE = {CHI 2013 Extended Abstracts}, EDITOR = {Baudisch, Patrick and Beaudouin-Lafon, Michel and Mackay, Wendy E.}, PAGES = {277--282}, ADDRESS = {Paris, France}, }
Endnote
%0 Conference Proceedings %A Bachynskyi, Myroslav %A Oulasvirta, Antti %A Palmas, Gregorio %A Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Biomechanical Simulation in the Analysis of Aimed Movements : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-1745-7 %F OTHER: Local-ID: D2FBFC0A6CB98FA9C1257B1000704153-Bachynskyi2012 %R 10.1145/2468356.2468406 %D 2013 %B The 31st Annual CHI Conference on Human Factors in Computing Systems %Z date of event: 2013-04-27 - 2013-05-02 %C Paris, France %X For efficient design of gestural user interfaces both performance and fatigue characteristics of movements must be understood. We are developing a novel method that allows for biomechanical analysis in conjunction with performance analysis. We capture motion data using optical tracking from which we can compute performance measures such as speed and accuracy. The measured motion data also serves as input for a biomechanical simulation using inverse dynamics and static optimization on a full-body skeletal model. The simulation augments the data by biomechanical quantities from which we derive an index of fatigue. We are working on an interactive analysis tool that allows practitioners to identify and compare movements with desirable performance and fatigue properties. We show the applicability of our methodology using a case study of rapid aimed movements to targets