2017
Adhikarla, V.K., Vinkler, M., Sumin, D., et al. 2017. Towards a Quality Metric for Dense Light Fields. http://arxiv.org/abs/1704.07576.
(arXiv: 1704.07576)
Abstract
Light fields are becoming a popular representation of three-dimensional scenes, and there is interest in their processing, resampling, and compression. As those operations often result in loss of quality, there is a need to quantify it. In this work, we collect a new dataset of dense reference and distorted light fields as well as the corresponding quality scores which are scaled in perceptual units. The scores were acquired in a subjective experiment using an interactive light-field viewing setup. The dataset contains typical artifacts that occur in the light-field processing chain due to light-field reconstruction, multi-view compression, and limitations of automultiscopic displays. We test a number of existing objective quality metrics to determine how well they can predict the quality of light fields. We find that the existing image quality metrics provide good measures of light-field quality, but require dense reference light fields for optimal performance. For more complex tasks of comparing two distorted light fields, their performance drops significantly, which reveals the need for new, light-field-specific metrics.
BibTeX
@online{AdhikarlaArXiv17, TITLE = {Towards a Quality Metric for Dense Light Fields}, AUTHOR = {Adhikarla, Vamsi Kiran and Vinkler, Marek and Sumin, Denis and Mantiuk, Rafa{\l} K. and Myszkowski, Karol and Seidel, Hans-Peter and Didyk, Piotr}, URL = {http://arxiv.org/abs/1704.07576}, EPRINT = {1704.07576}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Light fields are becoming a popular representation of three-dimensional scenes, and there is interest in their processing, resampling, and compression. As those operations often result in loss of quality, there is a need to quantify it. In this work, we collect a new dataset of dense reference and distorted light fields as well as the corresponding quality scores which are scaled in perceptual units. The scores were acquired in a subjective experiment using an interactive light-field viewing setup. The dataset contains typical artifacts that occur in the light-field processing chain due to light-field reconstruction, multi-view compression, and limitations of automultiscopic displays. We test a number of existing objective quality metrics to determine how well they can predict the quality of light fields. We find that the existing image quality metrics provide good measures of light-field quality, but require dense reference light fields for optimal performance. For more complex tasks of comparing two distorted light fields, their performance drops significantly, which reveals the need for new, light-field-specific metrics.}, }
Endnote
%0 Report %A Adhikarla, Vamsi Kiran %A Vinkler, Marek %A Sumin, Denis %A Mantiuk, Rafa&#322; K. %A Myszkowski, Karol %A Seidel, Hans-Peter %A Didyk, Piotr %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Towards a Quality Metric for Dense Light Fields : %U http://hdl.handle.net/11858/00-001M-0000-002D-2C2C-1 %U http://arxiv.org/abs/1704.07576 %D 2017 %X Light fields are becoming a popular representation of three-dimensional scenes, and there is interest in their processing, resampling, and compression. As those operations often result in loss of quality, there is a need to quantify it. In this work, we collect a new dataset of dense reference and distorted light fields as well as the corresponding quality scores which are scaled in perceptual units. The scores were acquired in a subjective experiment using an interactive light-field viewing setup. The dataset contains typical artifacts that occur in the light-field processing chain due to light-field reconstruction, multi-view compression, and limitations of automultiscopic displays. We test a number of existing objective quality metrics to determine how well they can predict the quality of light fields. We find that the existing image quality metrics provide good measures of light-field quality, but require dense reference light fields for optimal performance. For more complex tasks of comparing two distorted light fields, their performance drops significantly, which reveals the need for new, light-field-specific metrics. %K Computer Science, Computer Vision and Pattern Recognition, cs.CV
Arabadzhiyska, E., Tursun, O.T., Myszkowski, K., Seidel, H.-P., and Didyk, P. Saccade Landing Position Prediction for Gaze-Contingent Rendering. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2017) 36, 4.
(Accepted/in press)
BibTeX
@article{ArabadzhiyskaSIGGRAPH2017, TITLE = {Saccade Landing Position Prediction for Gaze-Contingent Rendering}, AUTHOR = {Arabadzhiyska, Elena and Tursun, Okan Tarhan and Myszkowski, Karol and Seidel, Hans-Peter and Didyk, Piotr}, LANGUAGE = {eng}, ISSN = {0730-0301}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2017}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {36}, NUMBER = {4}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2017}, }
Endnote
%0 Journal Article %A Arabadzhiyska, Elena %A Tursun, Okan Tarhan %A Myszkowski, Karol %A Seidel, Hans-Peter %A Didyk, Piotr %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Saccade Landing Position Prediction for Gaze-Contingent Rendering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-7D82-9 %D 2017 %J ACM Transactions on Graphics %V 36 %N 4 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2017 %O ACM SIGGRAPH 2017 Los Angeles, California, 30 July - 3 August
Dunn, D., Tippets, C., Torell, K., et al. 2017. Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors. IEEE Transactions on Visualization and Computer Graphics (Proc. VR 2017) 23, 4.
BibTeX
@article{DunnVR2017, TITLE = {Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors}, AUTHOR = {Dunn, David and Tippets, Cary and Torell, Kent and Kellnhofer, Petr and Ak{\c s}it, Kaan and Didyk, Piotr and Myszkowski, Karol and Luebke, David and Fuchs, Henry}, LANGUAGE = {eng}, ISSN = {1077-2626}, DOI = {10.1109/TVCG.2017.2657058}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics (Proc. VR)}, VOLUME = {23}, NUMBER = {4}, PAGES = {1322--1331}, BOOKTITLE = {Selected Proceedings IEEE Virtual Reality 2017 (VR 2017)}, }
Endnote
%0 Journal Article %A Dunn, David %A Tippets, Cary %A Torell, Kent %A Kellnhofer, Petr %A Ak&#351;it, Kaan %A Didyk, Piotr %A Myszkowski, Karol %A Luebke, David %A Fuchs, Henry %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-3095-4 %R 10.1109/TVCG.2017.2657058 %7 2017 %D 2017 %J IEEE Transactions on Visualization and Computer Graphics %V 23 %N 4 %& 1322 %P 1322 - 1331 %I IEEE Computer Society %C New York, NY %@ false %B Selected Proceedings IEEE Virtual Reality 2017 %O VR 2017 Los Angeles, California on March 18-22, 2017 %U http://telepresence.web.unc.edu/research/dynamic-focus-augmented-reality-display/
Weier, M., Stengel, M., Roth, T., et al. 2017. Perception-driven Accelerated Rendering. Computer Graphics Forum (Proc. EUROGRAPHICS 2017) 36, 2.
BibTeX
@article{WeierEG2017STAR, TITLE = {Perception-driven Accelerated Rendering}, AUTHOR = {Weier, Martin and Stengel, Michael and Roth, Thorsten and Didyk, Piotr and Eisemann, Elmar and Eisemann, Martin and Grogorick, Steve and Hinkenjann, Andr{\'e} and Kruijff, Ernst and Magnor, Marcus A. and Myszkowski, Karol and Slusallek, Philipp}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.13150}, PUBLISHER = {Blackwell-Wiley}, ADDRESS = {Oxford}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {36}, NUMBER = {2}, PAGES = {611--643}, BOOKTITLE = {EUROGRAPHICS 2017 -- State of the Art Reports}, }
Endnote
%0 Journal Article %A Weier, Martin %A Stengel, Michael %A Roth, Thorsten %A Didyk, Piotr %A Eisemann, Elmar %A Eisemann, Martin %A Grogorick, Steve %A Hinkenjann, Andr&#233; %A Kruijff, Ernst %A Magnor, Marcus A. %A Myszkowski, Karol %A Slusallek, Philipp %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Perception-driven Accelerated Rendering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-3496-8 %R 10.1111/cgf.13150 %7 2017 %D 2017 %J Computer Graphics Forum %V 36 %N 2 %& 611 %P 611 - 643 %I Blackwell-Wiley %C Oxford %@ false %B EUROGRAPHICS 2017 - State of the Art Reports %O EUROGRAPHICS 2017 EUROGRAPHICS 2017 - STAR EG 2017 Lyon, France, 24-28 April 2017
2016
Dąbała, Ł., Ziegler, M., Didyk, P., et al. 2016. Efficient Multi-image Correspondences for On-line Light Field Video Processing. Computer Graphics Forum (Proc. Pacific Graphics 2016) 35, 7.
BibTeX
@article{DabalaPG2016, TITLE = {Efficient Multi-image Correspondences for On-line Light Field Video Processing}, AUTHOR = {D{\c a}ba{\l}a, {\L}ukasz and Ziegler, Matthias and Didyk, Piotr and Zilly, Frederik and Keinert, Joachim and Myszkowski, Karol and Rokita, Przemyslaw and Ritschel, Tobias}, LANGUAGE = {eng}, ISSN = {1467-8659}, DOI = {10.1111/cgf.13037}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {35}, NUMBER = {7}, PAGES = {401--410}, BOOKTITLE = {The 24th Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2016)}, }
Endnote
%0 Journal Article %A D&#261;ba&#322;a, &#321;ukasz %A Ziegler, Matthias %A Didyk, Piotr %A Zilly, Frederik %A Keinert, Joachim %A Myszkowski, Karol %A Rokita, Przemyslaw %A Ritschel, Tobias %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Multi-image Correspondences for On-line Light Field Video Processing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82BA-5 %R 10.1111/cgf.13037 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 7 %& 401 %P 401 - 410 %I Wiley-Blackwell %C Oxford, UK %@ false %B The 24th Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2016 PG 2016
Gryaditskaya, Y., Masia, B., Didyk, P., Myszkowski, K., and Seidel, H.-P. 2016. Gloss Editing in Light Fields. VMV 2016 Vision, Modeling and Visualization, Eurographics Association.
BibTeX
@inproceedings{jgryadit2016, TITLE = {Gloss Editing in Light Fields}, AUTHOR = {Gryaditskaya, Yulia and Masia, Belen and Didyk, Piotr and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-3-03868-025-3}, DOI = {10.2312/vmv.20161351}, PUBLISHER = {Eurographics Association}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {VMV 2016 Vision, Modeling and Visualization}, EDITOR = {Hullin, Matthias and Stamminger, Marc and Weinkauf, Tino}, PAGES = {127--135}, ADDRESS = {Bayreuth, Germany}, }
Endnote
%0 Conference Proceedings %A Gryaditskaya, Yulia %A Masia, Belen %A Didyk, Piotr %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Gloss Editing in Light Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82C5-B %R 10.2312/vmv.20161351 %D 2016 %B 21st International Symposium on Vision, Modeling and Visualization %Z date of event: 2016-10-10 - 2016-10-12 %C Bayreuth, Germany %B VMV 2016 Vision, Modeling and Visualization %E Hullin, Matthias; Stamminger, Marc; Weinkauf, Tino %P 127 - 135 %I Eurographics Association %@ 978-3-03868-025-3
Havran, V., Filip, J., and Myszkowski, K. 2016. Perceptually Motivated BRDF Comparison using Single Image. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2016) 35, 4.
BibTeX
@article{havran2016perceptually, TITLE = {Perceptually Motivated {BRDF} Comparison using Single Image}, AUTHOR = {Havran, Vlastimil and Filip, Jiri and Myszkowski, Karol}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12944}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {35}, NUMBER = {4}, PAGES = {1--12}, BOOKTITLE = {Eurographics Symposium on Rendering 2016}, EDITOR = {Eisemann, Elmar and Fiume, Eugene}, }
Endnote
%0 Journal Article %A Havran, Vlastimil %A Filip, Jiri %A Myszkowski, Karol %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptually Motivated BRDF Comparison using Single Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82C0-6 %R 10.1111/cgf.12944 %7 2016 %D 2016 %J Computer Graphics Forum %V 35 %N 4 %& 1 %P 1 - 12 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2016 %O Eurographics Symposium on Rendering 2016 EGSR 2016 Dublin, Ireland, 22-24 June 2016
Kellnhofer, P., Didyk, P., Myszkowski, K., Hefeeda, M.M., Seidel, H.-P., and Matusik, W. 2016a. GazeStereo3D: Seamless Disparity Manipulations. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
BibTeX
@article{KellnhoferSIGGRAPH2016, TITLE = {{GazeStereo3D}: {S}eamless Disparity Manipulations}, AUTHOR = {Kellnhofer, Petr and Didyk, Piotr and Myszkowski, Karol and Hefeeda, Mohamed M. and Seidel, Hans-Peter and Matusik, Wojciech}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925866}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {68}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Didyk, Piotr %A Myszkowski, Karol %A Hefeeda, Mohamed M. %A Seidel, Hans-Peter %A Matusik, Wojciech %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T GazeStereo3D: Seamless Disparity Manipulations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0190-4 %R 10.1145/2897824.2925866 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 68 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
Kellnhofer, P., Didyk, P., Ritschel, T., Masia, B., Myszkowski, K., and Seidel, H.-P. 2016b. Motion Parallax in Stereo 3D: Model and Applications. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
BibTeX
@article{Kellnhofer2016SGA, TITLE = {Motion Parallax in Stereo {3D}: {M}odel and Applications}, AUTHOR = {Kellnhofer, Petr and Didyk, Piotr and Ritschel, Tobias and Masia, Belen and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2980179.2980230}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, EID = {176}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Didyk, Piotr %A Ritschel, Tobias %A Masia, Belen %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Motion Parallax in Stereo 3D: Model and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82B6-D %R 10.1145/2980179.2980230 %7 2016 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %Z sequence number: 176 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2016c. Transformation-aware Perceptual Image Metric. Journal of Electronic Imaging 25, 5.
BibTeX
@article{Kellnhofer2016jei, TITLE = {Transformation-aware Perceptual Image Metric}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {1017-9909}, DOI = {10.1117/1.JEI.25.5.053014}, PUBLISHER = {SPIE}, ADDRESS = {Bellingham, WA}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {Journal of Electronic Imaging}, VOLUME = {25}, NUMBER = {5}, EID = {053014}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Transformation-aware Perceptual Image Metric : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82B3-4 %R 10.1117/1.JEI.25.5.053014 %7 2016 %D 2016 %J Journal of Electronic Imaging %V 25 %N 5 %Z sequence number: 053014 %I SPIE %C Bellingham, WA %@ false
Lavoué, G., Liu, H., Myszkowski, K., and Lin, W. 2016. Quality Assessment and Perception in Computer Graphics. IEEE Computer Graphics and Applications 36, 4.
BibTeX
@article{Lavoue2016, TITLE = {Quality Assessment and Perception in Computer Graphics}, AUTHOR = {Lavou{\'e}, Guillaume and Liu, Hantao and Myszkowski, Karol and Lin, Weisi}, LANGUAGE = {eng}, ISSN = {0272-1716}, DOI = {10.1109/MCG.2016.72}, PUBLISHER = {IEEE Computer Society}, ADDRESS = {Los Alamitos, CA}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {IEEE Computer Graphics and Applications}, VOLUME = {36}, NUMBER = {4}, PAGES = {21--22}, }
Endnote
%0 Journal Article %A Lavou&#233;, Guillaume %A Liu, Hantao %A Myszkowski, Karol %A Lin, Weisi %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Quality Assessment and Perception in Computer Graphics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-8411-2 %R 10.1109/MCG.2016.72 %7 2016-07-29 %D 2016 %J IEEE Computer Graphics and Applications %V 36 %N 4 %& 21 %P 21 - 22 %I IEEE Computer Society %C Los Alamitos, CA %@ false
Leimkühler, T., Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2016. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion. Graphics Interface 2016, 42nd Graphics Interface Conference, Canadian Information Processing Society.
BibTeX
@inproceedings{LeimkuehlerGI2016, TITLE = {Perceptual real-time {2D}-to-{3D} conversion using cue fusion}, AUTHOR = {Leimk{\"u}hler, Thomas and Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-0-9947868-1-4}, DOI = {10.20380/GI2016.02}, PUBLISHER = {Canadian Information Processing Society}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {Graphics Interface 2016, 42nd Graphics Interface Conference}, EDITOR = {Popa, Tiberiu and Moffatt, Karyn}, PAGES = {5--12}, ADDRESS = {Victoria, Canada}, }
Endnote
%0 Conference Proceedings %A Leimk&#252;hler, Thomas %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-823D-1 %R 10.20380/GI2016.02 %D 2016 %B 42nd Graphics Interface Conference %Z date of event: 2016-06-01 - 2016-06-03 %C Victoria, Canada %B Graphics Interface 2016 %E Popa, Tiberiu; Moffatt, Karyn %P 5 - 12 %I Canadian Information Processing Society %@ 978-0-9947868-1-4
Mantiuk, R.K. and Myszkowski, K. 2016. Perception-Inspired High Dynamic Range Video Coding and Compression. In: CHIPS 2020 VOL. 2. Springer, New York, NY.
BibTeX
@incollection{Mantiuk_Chips2020, TITLE = {Perception-Inspired High Dynamic Range Video Coding and Compression}, AUTHOR = {Mantiuk, Rafa{\l} K. and Myszkowski, Karol}, LANGUAGE = {eng}, ISBN = {978-3-319-22092-5}, DOI = {10.1007/978-3-319-22093-2_14}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {CHIPS 2020 VOL. 2}, EDITOR = {Hoefflinger, Bernd}, PAGES = {211--220}, SERIES = {The Frontiers Collection}, }
Endnote
%0 Book Section %A Mantiuk, Rafa&#322; K. %A Myszkowski, Karol %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Perception-Inspired High Dynamic Range Video Coding and Compression : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-2DE8-3 %R 10.1007/978-3-319-22093-2_14 %D 2016 %B CHIPS 2020 VOL. 2 %E Hoefflinger, Bernd %P 211 - 220 %I Springer %C New York, NY %@ 978-3-319-22092-5 %S The Frontiers Collection
Serrano, A., Gutierrez, D., Myszkowski, K., Seidel, H.-P., and Masia, B. 2016a. An Intuitive Control Space for Material Appearance. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016) 35, 6.
BibTeX
@article{Serrano_MaterialAppearance_2016, TITLE = {An Intuitive Control Space for Material Appearance}, AUTHOR = {Serrano, Ana and Gutierrez, Diego and Myszkowski, Karol and Seidel, Hans-Peter and Masia, Belen}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2980179.2980242}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {35}, NUMBER = {6}, EID = {186}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2016}, }
Endnote
%0 Journal Article %A Serrano, Ana %A Gutierrez, Diego %A Myszkowski, Karol %A Seidel, Hans-Peter %A Masia, Belen %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T An Intuitive Control Space for Material Appearance : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-82B8-9 %R 10.1145/2980179.2980242 %7 2016 %D 2016 %J ACM Transactions on Graphics %O TOG %V 35 %N 6 %Z sequence number: 186 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2016 %O ACM SIGGRAPH Asia 2016
Serrano, A., Gutierrez, D., Myszkowski, K., Seidel, H.-P., and Masia, B. 2016b. Intuitive Editing of Material Appearance. ACM SIGGRAPH 2016 Posters.
BibTeX
@inproceedings{SerranoSIGGRAPH2016, TITLE = {Intuitive Editing of Material Appearance}, AUTHOR = {Serrano, Ana and Gutierrez, Diego and Myszkowski, Karol and Seidel, Hans-Peter and Masia, Belen}, LANGUAGE = {eng}, ISBN = {978-1-4503-4371-8}, DOI = {10.1145/2945078.2945141}, PUBLISHER = {ACM}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, BOOKTITLE = {ACM SIGGRAPH 2016 Posters}, PAGES = {1--2}, EID = {63}, ADDRESS = {Anaheim, CA, USA}, }
Endnote
%0 Generic %A Serrano, Ana %A Gutierrez, Diego %A Myszkowski, Karol %A Seidel, Hans-Peter %A Masia, Belen %+ External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Intuitive Editing of Material Appearance : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0170-C %R 10.1145/2945078.2945141 %D 2016 %Z name of event: the 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques %Z date of event: 2016-07-24 - 2016-07-28 %Z place of event: Anaheim, CA, USA %B ACM SIGGRAPH 2016 Posters %P 1 - 2 %Z sequence number: 63 %@ 978-1-4503-4371-8
Templin, K., Didyk, P., Myszkowski, K., and Seidel, H.-P. 2016. Emulating Displays with Continuously Varying Frame Rates. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2016) 35, 4.
BibTeX
@article{TemplinSIGGRAPH2016, TITLE = {Emulating Displays with Continuously Varying Frame Rates}, AUTHOR = {Templin, Krzysztof and Didyk, Piotr and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2897824.2925879}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {35}, NUMBER = {4}, EID = {67}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2016}, }
Endnote
%0 Journal Article %A Templin, Krzysztof %A Didyk, Piotr %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Emulating Displays with Continuously Varying Frame Rates : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-018D-E %R 10.1145/2897824.2925879 %7 2016 %D 2016 %J ACM Transactions on Graphics %V 35 %N 4 %Z sequence number: 67 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2016 %O ACM SIGGRAPH 2016 Anaheim, California
2015
Arpa, S., Ritschel, T., Myszkowski, K., Çapin, T., and Seidel, H.-P. 2015. Purkinje Images: Conveying Different Content for Different Luminance Adaptations in a Single Image. Computer Graphics Forum 34, 1.
BibTeX
@article{arpa2014purkinje, TITLE = {Purkinje Images: {Conveying} Different Content for Different Luminance Adaptations in a Single Image}, AUTHOR = {Arpa, Sami and Ritschel, Tobias and Myszkowski, Karol and {\c C}apin, Tolga and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12463}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum}, VOLUME = {34}, NUMBER = {1}, PAGES = {116--126}, }
Endnote
%0 Journal Article %A Arpa, Sami %A Ritschel, Tobias %A Myszkowski, Karol %A &#199;apin, Tolga %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Purkinje Images: Conveying Different Content for Different Luminance Adaptations in a Single Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5D0B-6 %R 10.1111/cgf.12463 %7 2014-10-18 %D 2015 %J Computer Graphics Forum %V 34 %N 1 %& 116 %P 116 - 126 %I Wiley-Blackwell %C Oxford
Gryaditskaya, Y., Pouli, T., Reinhard, E., Myszkowski, K., and Seidel, H.-P. 2015. Motion Aware Exposure Bracketing for HDR Video. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2015) 34, 4.
BibTeX
@article{Gryaditskaya2015, TITLE = {Motion Aware Exposure Bracketing for {HDR} Video}, AUTHOR = {Gryaditskaya, Yulia and Pouli, Tania and Reinhard, Erik and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12684}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {34}, NUMBER = {4}, PAGES = {119--130}, BOOKTITLE = {Eurographics Symposium on Rendering 2015}, EDITOR = {Lehtinen, Jaakko and Nowrouzezahrai, Derek}, }
Endnote
%0 Journal Article %A Gryaditskaya, Yulia %A Pouli, Tania %A Reinhard, Erik %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Motion Aware Exposure Bracketing for HDR Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-15D2-B %R 10.1111/cgf.12684 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 4 %& 119 %P 119 - 130 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2015 %O Eurographics Symposium on Rendering 2015 EGSR 2015 Darmstadt, Germany, June 24th - 26th, 2015
Kellnhofer, P., Ritschel, T., Myszkowski, K., Eisemann, E., and Seidel, H.-P. 2015a. Modeling Luminance Perception at Absolute Threshold. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2015) 34, 4.
BibTeX
@article{Kellnhofer2015a, TITLE = {Modeling Luminance Perception at Absolute Threshold}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Eisemann, Elmar and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12687}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {34}, NUMBER = {4}, PAGES = {155--164}, BOOKTITLE = {Eurographics Symposium on Rendering 2015}, EDITOR = {Lehtinen, Jaakko and Nowrouzezahrai, Derek}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Eisemann, Elmar %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Modeling Luminance Perception at Absolute Threshold : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-8E8D-4 %R 10.1111/cgf.12687 %7 2015 %D 2015 %J Computer Graphics Forum %V 34 %N 4 %& 155 %P 155 - 164 %I Wiley-Blackwell %C Oxford %@ false %B Eurographics Symposium on Rendering 2015 %O Eurographics Symposium on Rendering 2015 EGSR 2015 Darmstadt, Germany, June 24th - 26th, 2015
Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2015b. A Transformation-aware Perceptual Image Metric. Human Vision and Electronic Imaging XX (HVEI 2015), SPIE/IS&T.
Abstract
Predicting human visual perception of image differences has several applications such as compression, rendering, editing and retargeting. Current approaches, however, ignore the fact that the human visual system compensates for geometric transformations, e.g., we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed gets increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used for applications, such as comparison of non-aligned images, where transformations cause threshold elevation, and detection of salient transformations.
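The decomposition and entropy measure described in the abstract above can be sketched in a few lines. This is only an illustrative reading, not the authors' implementation: the similarity-only decomposition (rotation plus isotropic scale), the quantization, and all function names are assumptions.

```python
import numpy as np

def decompose_similarity(A):
    """Split the 2x2 linear part of a local homography into the
    nearest rotation angle and an isotropic scale, via SVD / polar
    decomposition. Assumes det(A) > 0, i.e. no reflection."""
    U, S, Vt = np.linalg.svd(A)
    R = U @ Vt                       # closest rotation matrix
    angle = np.arctan2(R[1, 0], R[0, 0])
    scale = float(S.mean())          # average singular value
    return angle, scale

def transformation_entropy(angles, bins=16):
    """Shannon entropy of quantized rotation angles over a flow
    field: a coherent field (one dominant transform) scores near 0,
    a spatially incoherent one scores near log2(bins)."""
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

In this toy form, a flow field whose local transforms all agree (e.g. a global rotation) yields near-zero entropy, matching the abstract's claim that coherent transformations are easy to compensate for.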
Export
BibTeX
@inproceedings{Kellnhofer2015, TITLE = {A Transformation-aware Perceptual Image Metric}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {9781628414844}, DOI = {10.1117/12.2076754}, PUBLISHER = {SPIE/IS\&T}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, ABSTRACT = {Predicting human visual perception of image differences has several applications such as compression, rendering, editing and retargeting. Current approaches however, ignore the fact that the human visual system compensates for geometric transformations, e.g. we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed gets increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used for applications, such as comparison of non-aligned images, where transformations cause threshold elevation, and detection of salient transformations.}, BOOKTITLE = {Human Vision and Electronic Imaging XX (HVEI 2015)}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N. and de Ridder, Huib}, EID = {939408}, SERIES = {Proceedings of SPIE}, VOLUME = {9394}, ADDRESS = {San Francisco, CA, USA}, }
Endnote
%0 Conference Proceedings %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Transformation-aware Perceptual Image Metric : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-544A-4 %R 10.1117/12.2076754 %D 2015 %B Human Vision and Electronic Imaging XX %Z date of event: 2015-02-08 - 2015-02-12 %C San Francisco, CA, USA %X Predicting human visual perception of image differences has several applications such as compression, rendering, editing and retargeting. Current approaches however, ignore the fact that the human visual system compensates for geometric transformations, e.g. we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed gets increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used for applications, such as comparison of non-aligned images, where transformations cause threshold elevation, and detection of salient transformations. 
%B Human Vision and Electronic Imaging XX %E Rogowitz, Bernice E.; Pappas, Thrasyvoulos N.; de Ridder, Huib %Z sequence number: 939408 %I SPIE/IS&T %@ 9781628414844 %B Proceedings of SPIE %N 9394
Kellnhofer, P., Leimkühler, T., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2015c. What Makes 2D-to-3D Stereo Conversion Perceptually Plausible? Proceedings SAP 2015, ACM.
Export
BibTeX
@inproceedings{Kellnhofer2015SAP, TITLE = {What Makes {2D}-to-{3D} Stereo Conversion Perceptually Plausible?}, AUTHOR = {Kellnhofer, Petr and Leimk{\"u}hler, Thomas and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, ISBN = {978-1-4503-3812-7}, DOI = {10.1145/2804408.2804409}, PUBLISHER = {ACM}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, BOOKTITLE = {Proceedings SAP 2015}, PAGES = {59--66}, ADDRESS = {T{\"u}bingen, Germany}, }
Endnote
%0 Conference Proceedings %A Kellnhofer, Petr %A Leimk&#252;hler, Thomas %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T What Makes 2D-to-3D Stereo Conversion Perceptually Plausible? : %U http://hdl.handle.net/11858/00-001M-0000-0029-2460-7 %R 10.1145/2804408.2804409 %D 2015 %B ACM SIGGRAPH Symposium on Applied Perception %Z date of event: 2015-09-13 - 2015-09-14 %C T&#252;bingen, Germany %B Proceedings SAP 2015 %P 59 - 66 %I ACM %@ 978-1-4503-3812-7 %U http://resources.mpi-inf.mpg.de/StereoCueFusion/WhatMakes3D/
Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2015. High Dynamic Range Imaging. In: Wiley Encyclopedia of Electrical and Electronics Engineering. Wiley, New York, NY.
Export
BibTeX
@incollection{MantiukEncyclopedia2015, TITLE = {High Dynamic Range Imaging}, AUTHOR = {Mantiuk, Rafa{\l} and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1002/047134608X.W8265}, PUBLISHER = {Wiley}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Wiley Encyclopedia of Electrical and Electronics Engineering}, EDITOR = {Webster, John G.}, PAGES = {1--42}, }
Endnote
%0 Book Section %A Mantiuk, Rafa&#322; %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T High Dynamic Range Imaging : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-A376-B %R 10.1002/047134608X.W8265 %D 2015 %8 15.06.2015 %B Wiley Encyclopedia of Electrical and Electronics Engineering %E Webster, John G. %P 1 - 42 %I Wiley %C New York, NY
Vangorp, P., Myszkowski, K., Graf, E., and Mantiuk, R. 2015a. An Empirical Model for Local Luminance Adaptation in the Fovea (Oral Presentation). Perception (Proc. ECVP 2015) 44, S1.
Export
BibTeX
@article{VangeropECVP2015, TITLE = {An Empirical Model for Local Luminance Adaptation in the Fovea (Oral Presentation)}, AUTHOR = {Vangorp, Peter and Myszkowski, Karol and Graf, Erich and Mantiuk, Rafa{\l}}, LANGUAGE = {eng}, ISSN = {0301-0066}, DOI = {10.1177/0301006615598674}, PUBLISHER = {SAGE}, ADDRESS = {London}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015-08}, JOURNAL = {Perception (Proc. ECVP)}, VOLUME = {44}, NUMBER = {S1}, PAGES = {98--98}, EID = {1T3C001}, BOOKTITLE = {38th European Conference on Visual Perception (ECVP 2015)}, }
Endnote
%0 Journal Article %A Vangorp, Peter %A Myszkowski, Karol %A Graf, Erich %A Mantiuk, Rafa&#322; %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T An Empirical Model for Local Luminance Adaptation in the Fovea (Oral Presentation) : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-245C-4 %R 10.1177/0301006615598674 %7 2015 %D 2015 %J Perception %V 44 %N S1 %& 98 %P 98 - 98 %Z sequence number: 1T3C001 %I SAGE %C London %@ false %B 38th European Conference on Visual Perception %O ECVP 2015 Liverpool
Vangorp, P., Myszkowski, K., Graf, E.W., and Mantiuk, R.K. 2015b. A Model of Local Adaptation. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2015) 34, 6.
Export
BibTeX
@article{Vangorp:2015:LocalAdaptationSIGAsia, TITLE = {A Model of Local Adaptation}, AUTHOR = {Vangorp, Peter and Myszkowski, Karol and Graf, Erich W. and Mantiuk, Rafa{\l} K.}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2816795.2818086}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {34}, NUMBER = {6}, PAGES = {1--13}, EID = {166}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2015}, }
Endnote
%0 Journal Article %A Vangorp, Peter %A Myszkowski, Karol %A Graf, Erich W. %A Mantiuk, Rafa&#322; K. %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Model of Local Adaptation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-2455-1 %R 10.1145/2816795.2818086 %7 2015 %D 2015 %J ACM Transactions on Graphics %O TOG %V 34 %N 6 %& 1 %P 1 - 13 %Z sequence number: 166 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2015 %O ACM SIGGRAPH Asia 2015 Kobe, Japan %U http://resources.mpi-inf.mpg.de/LocalAdaptation/
2014
Dabala, L., Kellnhofer, P., Ritschel, T., et al. 2014. Manipulating Refractive and Reflective Binocular Disparity. Computer Graphics Forum (Proc. EUROGRAPHICS 2014) 33, 2.
Abstract
Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e.g., for glass, which both reflects and refracts, which may confuse the observer and result in a poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates that our approach combines comfortable viewing with realistic depiction of typical specular scenes.
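As a toy illustration of why multiple per-pixel disparities constrain the camera, the sketch below picks the largest interaxial distance that keeps every layer of a layered surface (e.g. a refracted and a reflected depth) within a fusion limit. This is a simplified stand-in for the per-pixel optimization described in the abstract; the parallel-rig disparity model and all names are assumptions.

```python
def disparity_px(z, b, f, z0):
    """Screen disparity (pixels) of a point at depth z for a stereo
    rig with interaxial b, focal length f (pixels) and convergence
    distance z0 -- a standard thin-rig approximation."""
    return f * b * (1.0 / z0 - 1.0 / z)

def max_fusable_interaxial(layer_depths, f, z0, limit_px):
    """Largest interaxial b such that *every* layer visible at one
    image location (refracted, reflected, ...) stays within a
    disparity fusion/comfort limit of limit_px pixels."""
    worst = max(abs(1.0 / z0 - 1.0 / z) for z in layer_depths)
    return limit_px / (f * worst)
```

With layers at 1 m, 2.5 m and 4 m, f = 1000 px, convergence at 2 m and a 30 px limit, this yields b = 0.06 m; the nearest layer is the binding constraint, exactly the "excessive disparity" case the abstract warns about.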
Export
BibTeX
@article{Kellnhofer2014b, TITLE = {Manipulating Refractive and Reflective Binocular Disparity}, AUTHOR = {Dabala, Lukasz and Kellnhofer, Petr and Ritschel, Tobias and Didyk, Piotr and Templin, Krzysztof and Rokita, Przemyslaw and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12290}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e. g., for glass, that both reflects and refracts, which may confuse the observer and result in poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates, that our approach combines comfortable viewing with realistic depiction of typical specular scenes.}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {33}, NUMBER = {2}, PAGES = {53--62}, BOOKTITLE = {EUROGRAPHICS 2014}, EDITOR = {L{\'e}vy, Bruno and Kautz, Jan}, }
Endnote
%0 Journal Article %A Dabala, Lukasz %A Kellnhofer, Petr %A Ritschel, Tobias %A Didyk, Piotr %A Templin, Krzysztof %A Rokita, Przemyslaw %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Manipulating Refractive and Reflective Binocular Disparity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-EEF9-6 %R 10.1111/cgf.12290 %7 2014-06-01 %D 2014 %X Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e. g., for glass, that both reflects and refracts, which may confuse the observer and result in poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. 
A preliminary perceptual study indicates, that our approach combines comfortable viewing with realistic depiction of typical specular scenes. %J Computer Graphics Forum %V 33 %N 2 %& 53 %P 53 - 62 %I Wiley-Blackwell %C Oxford, UK %B EUROGRAPHICS 2014 %O The European Association for Computer Graphics 35th Annual Conference ; Strasbourg, France, April 7th &#8211; 11th, 2014 EUROGRAPHICS 2014 EG 2014
Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2014a. Improving Perception of Binocular Stereo Motion on 3D Display Devices. Stereoscopic Displays and Applications XXV, SPIE.
Abstract
This paper studies the presentation of moving stereo images on different display devices. We address three representative issues. First, we propose temporal compensation for the Pulfrich effect found when using anaglyph glasses. Second, we describe how content-adaptive capture protocols can reduce the false motion-in-depth sensation on time-multiplexing based displays. Third, we conclude with a recommendation on how to improve the rendering of synthetic stereo animations.
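The first issue, temporal compensation for the Pulfrich effect, can be sketched very simply: the darkened eye is perceived with a temporal lag, so it is fed a frame from slightly ahead in time. This is a hypothetical illustration of the idea, not the paper's method; names and the integer-frame delay are assumptions.

```python
def pulfrich_compensated_pair(frames, t, delay_frames, dark_eye="left"):
    """Anaglyph filters darken one eye, and the visual system
    processes the darker image with a temporal lag (the Pulfrich
    effect), which distorts perceived motion in depth. A simple
    compensation: show the darkened eye a frame from `delay_frames`
    ahead, so both eyes are perceived at the same instant."""
    lead = min(t + delay_frames, len(frames) - 1)  # clamp at sequence end
    if dark_eye == "left":
        return frames[lead], frames[t]   # (left image, right image)
    return frames[t], frames[lead]
```

For a real player the lag would depend on luminance attenuation and would generally be fractional, requiring frame interpolation rather than selection.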
Export
BibTeX
@inproceedings{Kellnhofer2014a, TITLE = {Improving Perception of Binocular Stereo Motion on {3D} Display Devices}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0277-786X}, ISBN = {9780819499288}, DOI = {10.1117/12.2032389}, PUBLISHER = {SPIE}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {This paper studies the presentation of moving stereo images on different display devices. We address three representative issues. First, we propose temporal compensation for the Pulfrich effect found when using anaglyph glasses. Second, we describe, how content-adaptive capture protocols can reduce false motion-in-depth sensation for time-multiplexing based displays. Third, we conclude with a recommendation how to improve rendering of synthetic stereo animations.}, BOOKTITLE = {Stereoscopic Displays and Applications XXV}, EDITOR = {Woods, Andrew J. and Holliman, Nicolas S. and Favalora, Gregg E.}, PAGES = {1--11}, EID = {901116}, SERIES = {Proceedings of SPIE-IS\&T Electronic Imaging}, VOLUME = {9011}, ADDRESS = {San Francisco, CA, USA}, }
Endnote
%0 Conference Proceedings %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Improving Perception of Binocular Stereo Motion on 3D Display Devices : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-318D-7 %R 10.1117/12.2032389 %D 2014 %B Stereoscopic Displays and Applications XXV %Z date of event: 2014-02-03 - 2014-02-05 %C San Francisco, CA, USA %X This paper studies the presentation of moving stereo images on different display devices. We address three representative issues. First, we propose temporal compensation for the Pulfrich effect found when using anaglyph glasses. Second, we describe, how content-adaptive capture protocols can reduce false motion-in-depth sensation for time-multiplexing based displays. Third, we conclude with a recommendation how to improve rendering of synthetic stereo animations. %B Stereoscopic Displays and Applications XXV %E Woods, Andrew J.; Holliman, Nicolas S.; Favalora, Gregg E. %P 1 - 11 %Z sequence number: 901116 %I SPIE %@ 9780819499288 %B Proceedings of SPIE-IS&T Electronic Imaging %N 9011 %@ false
Kellnhofer, P., Ritschel, T., Vangorp, P., Myszkowski, K., and Seidel, H.-P. 2014b. Stereo Day-for-Night: Retargeting Disparity for Scotopic Vision. ACM Transactions on Applied Perception 11, 3.
Export
BibTeX
@article{kellnhofer:2014c:DarkStereo, TITLE = {Stereo Day-for-Night: Retargeting Disparity for Scotopic Vision}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Vangorp, Peter and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {1544-3558}, DOI = {10.1145/2644813}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, DATE = {2014}, JOURNAL = {ACM Transactions on Applied Perception}, VOLUME = {11}, NUMBER = {3}, EID = {15}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Ritschel, Tobias %A Vangorp, Peter %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Stereo Day-for-Night: Retargeting Disparity for Scotopic Vision : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EE0E-E %R 10.1145/2644813 %7 2014 %D 2014 %J ACM Transactions on Applied Perception %V 11 %N 3 %Z sequence number: 15 %I ACM %C New York, NY %@ false
Pajak, D., Herzog, R., Mantiuk, R., et al. 2014. Perceptual Depth Compression for Stereo Applications. Computer Graphics Forum (Proc. EUROGRAPHICS 2014) 33, 2.
Export
BibTeX
@article{PajakEG2014, TITLE = {Perceptual Depth Compression for Stereo Applications}, AUTHOR = {Pajak, Dawid and Herzog, Robert and Mantiuk, Rados{\l}aw and Didyk, Piotr and Eisemann, Elmar and Myszkowski, Karol and Pulli, Kari}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12293}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2014}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {33}, NUMBER = {2}, PAGES = {195--204}, BOOKTITLE = {EUROGRAPHICS 2014}, EDITOR = {L{\'e}vy, Bruno and Kautz, Jan}, }
Endnote
%0 Journal Article %A Pajak, Dawid %A Herzog, Robert %A Mantiuk, Rados&#322;aw %A Didyk, Piotr %A Eisemann, Elmar %A Myszkowski, Karol %A Pulli, Kari %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Perceptual Depth Compression for Stereo Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-3C0C-0 %R 10.1111/cgf.12293 %7 2014-06-01 %D 2014
%J Computer Graphics Forum %V 33 %N 2 %& 195 %P 195 - 204 %I Wiley-Blackwell %C Oxford, UK %B EUROGRAPHICS 2014 %O The European Association for Computer Graphics 35th Annual Conference ; Strasbourg, France, April 7th &#8211; 11th, 2014 EUROGRAPHICS 2014 EG 2014
Templin, K., Didyk, P., Myszkowski, K., Hefeeda, M.M., Seidel, H.-P., and Matusik, W. 2014a. Modeling and Optimizing Eye Vergence Response to Stereoscopic Cuts. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2014) 33, 4.
Export
BibTeX
@article{Templin:2014:MOE:2601097.2601148, TITLE = {Modeling and Optimizing Eye Vergence Response to Stereoscopic Cuts}, AUTHOR = {Templin, Krzysztof and Didyk, Piotr and Myszkowski, Karol and Hefeeda, Mohamed M. and Seidel, Hans-Peter and Matusik, Wojciech}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2601097.2601148}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2014}, DATE = {2014}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {33}, NUMBER = {4}, PAGES = {1--8}, EID = {145}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2014}, }
Endnote
%0 Journal Article %A Templin, Krzysztof %A Didyk, Piotr %A Myszkowski, Karol %A Hefeeda, Mohamed M. %A Seidel, Hans-Peter %A Matusik, Wojciech %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Modeling and Optimizing Eye Vergence Response to Stereoscopic Cuts : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EE16-9 %R 10.1145/2601097.2601148 %7 2014 %D 2014 %K S3D, binocular, eye&#8208;tracking %J ACM Transactions on Graphics %V 33 %N 4 %& 1 %P 1 - 8 %Z sequence number: 145 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2014 %O ACM SIGGRAPH 2014 Vancouver, BC, Canada
Templin, K., Didyk, P., Myszkowski, K., and Seidel, H.-P. 2014b. Perceptually-motivated Stereoscopic Film Grain. Computer Graphics Forum (Proc. Pacific Graphics 2014) 33, 7.
Export
BibTeX
@article{Templin2014b, TITLE = {Perceptually-motivated Stereoscopic Film Grain}, AUTHOR = {Templin, Krzysztof and Didyk, Piotr and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12503}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2014}, DATE = {2014}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {33}, NUMBER = {7}, PAGES = {349--358}, BOOKTITLE = {22nd Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2014)}, }
Endnote
%0 Journal Article %A Templin, Krzysztof %A Didyk, Piotr %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptually-motivated Stereoscopic Film Grain : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5DF2-B %R 10.1111/cgf.12503 %7 2014-10-28 %D 2014 %J Computer Graphics Forum %V 33 %N 7 %& 349 %P 349 - 358 %I Wiley-Blackwell %C Oxford %B 22nd Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2014 PG 2014 8 to 10 Oct 2014, Seoul, South Korea
Vangorp, P., Mantiuk, R., Bazyluk, B., et al. 2014. Depth from HDR: Depth Induction or Increased Realism? SAP 2014, ACM Symposium on Applied Perception, ACM.
Export
BibTeX
@inproceedings{Vangorp2014, TITLE = {Depth from {HDR}: {Depth} Induction or Increased Realism?}, AUTHOR = {Vangorp, Peter and Mantiuk, Rafal and Bazyluk, Bartosz and Myszkowski, Karol and Mantiuk, Rados{\l}aw and Watt, Simon J. and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-4503-3009-1}, DOI = {10.1145/2628257.2628258}, PUBLISHER = {ACM}, YEAR = {2014}, DATE = {2014}, BOOKTITLE = {SAP 2014, ACM Symposium on Applied Perception}, EDITOR = {Bailey, Reynold and Kuhl, Scott}, PAGES = {71--78}, ADDRESS = {Vancouver, Canada}, }
Endnote
%0 Conference Proceedings %A Vangorp, Peter %A Mantiuk, Rafal %A Bazyluk, Bartosz %A Myszkowski, Karol %A Mantiuk, Rados&#322;aw %A Watt, Simon J. %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Depth from HDR: Depth Induction or Increased Realism? : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-34DB-5 %R 10.1145/2628257.2628258 %D 2014 %B ACM Symposium on Applied Perception %Z date of event: 2014-08-08 - 2014-08-09 %C Vancouver, Canada %K binocular disparity, contrast, luminance, stereo 3D %B SAP 2014 %E Bailey, Reynold; Kuhl, Scott %P 71 - 78 %I ACM %@ 978-1-4503-3009-1
2013
Čadík, M., Herzog, R., Mantiuk, R., Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2013. Learning to Predict Localized Distortions in Rendered Images. Computer Graphics Forum (Proc. Pacific Graphics 2013) 32, 7.
Export
BibTeX
@article{CadikPG2013, TITLE = {Learning to Predict Localized Distortions in Rendered Images}, AUTHOR = {{\v C}ad{\'i}k, Martin and Herzog, Robert and Mantiuk, Rafa{\l} and Mantiuk, Rados{\l}aw and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/cgf.12248}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford}, YEAR = {2013}, DATE = {2013}, JOURNAL = {Computer Graphics Forum (Proc. Pacific Graphics)}, VOLUME = {32}, NUMBER = {7}, PAGES = {401--410}, BOOKTITLE = {21st Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2013)}, }
Endnote
%0 Journal Article %A &#268;ad&#237;k, Martin %A Herzog, Robert %A Mantiuk, Rafa&#322; %A Mantiuk, Rados&#322;aw %A Myszkowski, Karol %A Seidel, Hans-Peter %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Learning to Predict Localized Distortions in Rendered Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5DF9-E %R 10.1111/cgf.12248 %7 2014-11-25 %D 2013 %J Computer Graphics Forum %V 32 %N 7 %& 401 %P 401 - 410 %I Wiley-Blackwell %C Oxford %B 21st Pacific Conference on Computer Graphics and Applications %O Pacific Graphics 2013 PG 2013 October 7-9, 2013, Singapore
Kellnhofer, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2013. Optimizing Disparity for Motion in Depth. Computer Graphics Forum (Proc. Eurographics Symposium on Rendering 2013) 32, 4.
Abstract
Beyond the careful design of stereo acquisition equipment and rendering algorithms, disparity post-processing has recently received much attention, where one of the key tasks is to compress the originally large disparity range to avoid viewing discomfort. The perception of dynamic stereo content however, relies on reproducing the full disparity-time volume that a scene point undergoes in motion. This volume can be strongly distorted in manipulation, which is only concerned with changing disparity at one instant in time, even if the temporal coherence of that change is maintained. We propose an optimization to preserve stereo motion of content that was subject to an arbitrary disparity manipulation, based on a perceptual model of temporal disparity changes. Furthermore, we introduce a novel 3D warping technique to create stereo image pairs that conform to this optimized disparity map. The paper concludes with perceptual studies of motion reproduction quality and task performance in a simple game, showing how our optimization can achieve both viewing comfort and faithful stereo motion.
Export
BibTeX
@article{Kellnhofer2013, TITLE = {Optimizing Disparity for Motion in Depth}, AUTHOR = {Kellnhofer, Petr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/cgf.12160}, LOCALID = {Local-ID: AAA9E8B7CDD4AD1FC1257BFD004E5D30-Kellnhofer2013}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Beyond the careful design of stereo acquisition equipment and rendering algorithms, disparity post-processing has recently received much attention, where one of the key tasks is to compress the originally large disparity range to avoid viewing discomfort. The perception of dynamic stereo content however, relies on reproducing the full disparity-time volume that a scene point undergoes in motion. This volume can be strongly distorted in manipulation, which is only concerned with changing disparity at one instant in time, even if the temporal coherence of that change is maintained. We propose an optimization to preserve stereo motion of content that was subject to an arbitrary disparity manipulation, based on a perceptual model of temporal disparity changes. Furthermore, we introduce a novel 3D warping technique to create stereo image pairs that conform to this optimized disparity map. The paper concludes with perceptual studies of motion reproduction quality and task performance in a simple game, showing how our optimization can achieve both viewing comfort and faithful stereo motion.}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics Symposium on Rendering)}, VOLUME = {32}, NUMBER = {4}, PAGES = {143--152}, BOOKTITLE = {Eurographics Symposium on Rendering 2013}, EDITOR = {Holzschuch, N. and Rusinkiewicz, S.}, }
Endnote
%0 Journal Article %A Kellnhofer, Petr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Optimizing Disparity for Motion in Depth : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3D13-B %R 10.1111/cgf.12160 %F OTHER: Local-ID: AAA9E8B7CDD4AD1FC1257BFD004E5D30-Kellnhofer2013 %7 2013 %D 2013 %X Beyond the careful design of stereo acquisition equipment and rendering algorithms, disparity post-processing has recently received much attention, where one of the key tasks is to compress the originally large disparity range to avoid viewing discomfort. The perception of dynamic stereo content however, relies on reproducing the full disparity-time volume that a scene point undergoes in motion. This volume can be strongly distorted in manipulation, which is only concerned with changing disparity at one instant in time, even if the temporal coherence of that change is maintained. We propose an optimization to preserve stereo motion of content that was subject to an arbitrary disparity manipulation, based on a perceptual model of temporal disparity changes. Furthermore, we introduce a novel 3D warping technique to create stereo image pairs that conform to this optimized disparity map. The paper concludes with perceptual studies of motion reproduction quality and task performance in a simple game, showing how our optimization can achieve both viewing comfort and faithful stereo motion. %J Computer Graphics Forum %V 32 %N 4 %& 143 %P 143 - 152 %I Wiley-Blackwell %C Oxford, UK %@ false %B Eurographics Symposium on Rendering 2013 %O EGSR 2013 Eurographics Symposium on Rendering 2013 Zaragoza, 19 - 21 June, 2013
2012
Banterle, F., Artusi, A., Aydin, T.O., et al. 2012. Mapping Images to Target Devices: Spatial, Temporal, Stereo, Tone, and Color. EG 2012 - Tutorials (EUROGRAPHICS 2012), Eurographics Association.
Export
BibTeX
@inproceedings{Didyk2012Course, TITLE = {Mapping Images to Target Devices: {Spatial}, Temporal, Stereo, Tone, and Color}, AUTHOR = {Banterle, Francesco and Artusi, Alessandro and Aydin, Tunc O. and Didyk, Piotr and Eisemann, Elmar and Gutierrez, Diego and Mantiuk, Rafal and Myszkowski, Karol and Ritschel, Tobias}, LANGUAGE = {eng}, ISSN = {1017-4656}, DOI = {10.2312/conf/EG2012/tutorials/t1}, PUBLISHER = {Eurographics Association}, YEAR = {2012}, BOOKTITLE = {EG 2012 -- Tutorials (EUROGRAPHICS 2012)}, EDITOR = {Pajarola, Renato and Spagnuolo, Michela}, EID = {T1}, ADDRESS = {Cagliari, Sardinia, Italy}, }
Endnote
%0 Conference Proceedings %A Banterle, Francesco %A Artusi, Alessandro %A Aydin, Tunc O. %A Didyk, Piotr %A Eisemann, Elmar %A Gutierrez, Diego %A Mantiuk, Rafal %A Myszkowski, Karol %A Ritschel, Tobias %+ External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Mapping Images to Target Devices: Spatial, Temporal, Stereo, Tone, and Color : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-F3BC-E %R 10.2312/conf/EG2012/tutorials/t1 %D 2012 %B The European Association for Computer Graphics 33rd Annual Conference %Z date of event: 2012-05-06 - 2012-05-09 %C Cagliari, Sardinia, Italy %B EG 2012 - Tutorials %E Pajarola, Renato; Spagnuolo, Michela %Z sequence number: T1 %I Eurographics Association %@ false
Čadík, M., Herzog, R., Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2012. New Measurements Reveal Weaknesses of Image Quality Metrics in Evaluating Graphics Artifacts. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2012) 31, 6.
Export
BibTeX
@article{cadik12iqm_evaluation, TITLE = {New Measurements Reveal Weaknesses of Image Quality Metrics in Evaluating Graphics Artifacts}, AUTHOR = {{\v C}ad{\'i}k, Martin and Herzog, Robert and Mantiuk, Rafal and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2366145.2366166}, LOCALID = {Local-ID: 1D6D7862B7800D8DC1257AD7003415AE-cadik12iqm_evaluation}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2012}, DATE = {2012}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {31}, NUMBER = {6}, PAGES = {1--10}, EID = {147}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2012}, }
Endnote
%0 Journal Article %A &#268;ad&#237;k, Martin %A Herzog, Robert %A Mantiuk, Rafal %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T New Measurements Reveal Weaknesses of Image Quality Metrics in Evaluating Graphics Artifacts : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-166E-6 %R 10.1145/2366145.2366166 %F OTHER: Local-ID: 1D6D7862B7800D8DC1257AD7003415AE-cadik12iqm_evaluation %7 2012 %D 2012 %J ACM Transactions on Graphics %V 31 %N 6 %& 1 %P 1 - 10 %Z sequence number: 147 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2012 %O ACM SIGGRAPH Asia 2012 Singapore, 28 November - 1 December 2012
Didyk, P., Ritschel, T., Eisemann, E., Myszkowski, K., Seidel, H.-P., and Matusik, W. 2012a. A Luminance-contrast-aware Disparity Model and Applications. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2012) 31, 6.
Export
BibTeX
@article{Didyk2012SigAsia, TITLE = {A Luminance-contrast-aware Disparity Model and Applications}, AUTHOR = {Didyk, Piotr and Ritschel, Tobias and Eisemann, Elmar and Myszkowski, Karol and Seidel, Hans-Peter and Matusik, Wojciech}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2366145.2366203}, LOCALID = {Local-ID: C754E5AADEF5EA2AC1257AFE0056029B-Didyk2012SigAsia}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2012}, DATE = {2012}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {31}, NUMBER = {6}, PAGES = {184:1--184:10}, EID = {184}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2012}, }
Endnote
%0 Journal Article %A Didyk, Piotr %A Ritschel, Tobias %A Eisemann, Elmar %A Myszkowski, Karol %A Seidel, Hans-Peter %A Matusik, Wojciech %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T A Luminance-contrast-aware Disparity Model and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-F3C4-9 %R 10.1145/2366145.2366203 %F OTHER: Local-ID: C754E5AADEF5EA2AC1257AFE0056029B-Didyk2012SigAsia %D 2012 %J ACM Transactions on Graphics %V 31 %N 6 %& 184:1 %P 184:1 - 184:10 %Z sequence number: 184 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2012 %O Singapore, 28 November - 1 December ACM SIGGRAPH Asia 2012
Didyk, P., Ritschel, T., Eisemann, E., Myszkowski, K., and Seidel, H.-P. 2012b. Apparent Stereo: The Cornsweet Illusion Can Enhance Perceived Depth. Human Vision and Electronic Imaging XVII (HVEI 2012), SPIE/IS&T.
Export
BibTeX
@inproceedings{Didyk2012Cornsweet, TITLE = {Apparent Stereo: The {Cornsweet} Illusion Can Enhance Perceived Depth}, AUTHOR = {Didyk, Piotr and Ritschel, Tobias and Eisemann, Elmar and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0277-786X}, ISBN = {9780819489388}, DOI = {10.1117/12.907612}, LOCALID = {Local-ID: B0D8F2F7DF789CF4C1257A710043B8CF-Didyk2012Cornsweet}, PUBLISHER = {SPIE/IS\&T}, YEAR = {2012}, DATE = {2012}, BOOKTITLE = {Human Vision and Electronic Imaging XVII (HVEI 2012)}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N. and de Ridder, Huib}, PAGES = {1--12}, SERIES = {Proceedings of SPIE}, VOLUME = {8291}, ADDRESS = {Burlingame, CA}, }
Endnote
%0 Conference Proceedings %A Didyk, Piotr %A Ritschel, Tobias %A Eisemann, Elmar %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Apparent Stereo: The Cornsweet Illusion Can Enhance Perceived Depth : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-13C8-5 %R 10.1117/12.907612 %F OTHER: Local-ID: B0D8F2F7DF789CF4C1257A710043B8CF-Didyk2012Cornsweet %D 2012 %B Human Vision and Electronic Imaging XVII %Z date of event: 2012-01-23 - 2012-01-26 %C Burlingame, CA %B Human Vision and Electronic Imaging XVII %E Rogowitz, Bernice E.; Pappas, Thrasyvoulos N.; de Ridder, Huib %P 1 - 12 %I SPIE/IS&T %@ 9780819489388 %B Proceedings of SPIE %N 8291 %@ false
Didyk, P., Ritschel, T., Eisemann, E., and Myszkowski, K. 2012c. Exceeding Physical Limitations: Apparent Display Qualities. In: Perceptual Digital Imaging. CRC, Boca Raton, FL.
Export
BibTeX
@incollection{Didyk2012Chapter, TITLE = {Exceeding Physical Limitations: {Apparent} Display Qualities}, AUTHOR = {Didyk, Piotr and Ritschel, Tobias and Eisemann, Elmar and Myszkowski, Karol}, LANGUAGE = {eng}, ISBN = {9781439868560}, LOCALID = {Local-ID: 68CF453A32B7C773C1257A710045D6CB-Didyk2012Chapter}, PUBLISHER = {CRC}, ADDRESS = {Boca Raton, FL}, YEAR = {2012}, DATE = {2012}, BOOKTITLE = {Perceptual Digital Imaging}, EDITOR = {Lukac, Rastislav}, PAGES = {469--501}, }
Endnote
%0 Book Section %A Didyk, Piotr %A Ritschel, Tobias %A Eisemann, Elmar %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Exceeding Physical Limitations: Apparent Display Qualities : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-13CF-8 %F OTHER: Local-ID: 68CF453A32B7C773C1257A710045D6CB-Didyk2012Chapter %D 2012 %B Perceptual Digital Imaging %E Lukac, Rastislav %P 469 - 501 %I CRC %C Boca Raton, FL %@ 9781439868560
Herzog, R., Čadík, M., Aydin, T.O., Kim, K.I., Myszkowski, K., and Seidel, H.-P. 2012. NoRM: No-reference Image Quality Metric for Realistic Image Synthesis. Computer Graphics Forum (Proc. EUROGRAPHICS 2012) 31, 2.
Abstract
Synthetically generating images and video frames of complex 3D scenes using some photo-realistic rendering software is often prone to artifacts and requires expert knowledge to tune the parameters. The manual work required for detecting and preventing artifacts can be automated through objective quality evaluation of synthetic images. Most practical objective quality assessment methods of natural images rely on a ground-truth reference, which is often not available in rendering applications. While general purpose no-reference image quality assessment is a difficult problem, we show in a subjective study that the performance of a dedicated no-reference metric as presented in this paper can match the state-of-the-art metrics that do require a reference. This level of predictive power is achieved exploiting information about the underlying synthetic scene (e.g., 3D surfaces, textures) instead of merely considering color, and training our learning framework with typical rendering artifacts. We show that our method successfully detects various non-trivial types of artifacts such as noise and clamping bias due to insufficient virtual point light sources, and shadow map discretization artifacts. We also briefly discuss an inpainting method for automatic correction of detected artifacts.
Export
BibTeX
@article{NoRM_EG2012, TITLE = {{NoRM}: {No-reference} Image Quality Metric for Realistic Image Synthesis}, AUTHOR = {Herzog, Robert and {\v C}ad{\'i}k, Martin and Aydin, Tunc Ozan and Kim, Kwang In and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/j.1467-8659.2012.03055.x}, LOCALID = {Local-ID: 673028A8C798FD45C1257A47004B2978-NoRM_EG2012}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Synthetically generating images and video frames of complex 3D scenes using some photo-realistic rendering software is often prone to artifacts and requires expert knowledge to tune the parameters. The manual work required for detecting and preventing artifacts can be automated through objective quality evaluation of synthetic images. Most practical objective quality assessment methods of natural images rely on a ground-truth reference, which is often not available in rendering applications. While general purpose no-reference image quality assessment is a difficult problem, we show in a subjective study that the performance of a dedicated no-reference metric as presented in this paper can match the state-of-the-art metrics that do require a reference. This level of predictive power is achieved exploiting information about the underlying synthetic scene (e.g., 3D surfaces, textures) instead of merely considering color, and training our learning framework with typical rendering artifacts. We show that our method successfully detects various non-trivial types of artifacts such as noise and clamping bias due to insufficient virtual point light sources, and shadow map discretization artifacts. We also briefly discuss an inpainting method for automatic correction of detected artifacts.}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {31}, NUMBER = {2}, PAGES = {545--554}, BOOKTITLE = {EUROGRAPHICS 2012}, EDITOR = {Cignoni, Paolo and Ertl, Thomas}, }
Endnote
%0 Journal Article %A Herzog, Robert %A &#268;ad&#237;k, Martin %A Aydin, Tunc Ozan %A Kim, Kwang In %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T NoRM: No-reference Image Quality Metric for Realistic Image Synthesis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-1586-9 %R 10.1111/j.1467-8659.2012.03055.x %F OTHER: Local-ID: 673028A8C798FD45C1257A47004B2978-NoRM_EG2012 %7 2012-06-14 %D 2012 %X Synthetically generating images and video frames of complex 3D scenes using some photo-realistic rendering software is often prone to artifacts and requires expert knowledge to tune the parameters. The manual work required for detecting and preventing artifacts can be automated through objective quality evaluation of synthetic images. Most practical objective quality assessment methods of natural images rely on a ground-truth reference, which is often not available in rendering applications. While general purpose no-reference image quality assessment is a difficult problem, we show in a subjective study that the performance of a dedicated no-reference metric as presented in this paper can match the state-of-the-art metrics that do require a reference. This level of predictive power is achieved exploiting information about the underlying synthetic scene (e.g., 3D surfaces, textures) instead of merely considering color, and training our learning framework with typical rendering artifacts. We show that our method successfully detects various non-trivial types of artifacts such as noise and clamping bias due to insufficient virtual point light sources, and shadow map discretization artifacts. We also briefly discuss an inpainting method for automatic correction of detected artifacts. %J Computer Graphics Forum %V 31 %N 2 %& 545 %P 545 - 554 %I Wiley-Blackwell %C Oxford, UK %@ false %B EUROGRAPHICS 2012 %O EUROGRAPHICS 2012 The European Association for Computer Graphics 33rd Annual Conference, Cagliari, Sardinia, Italy, May 13th &#8211; 18th, 2012 EG 2012
Nguyen, C., Ritschel, T., Myszkowski, K., Eisemann, E., and Seidel, H.-P. 2012. 3D Material Style Transfer. Computer Graphics Forum (Proc. EUROGRAPHICS 2012) 31, 2.
Abstract
This work proposes a technique to transfer the material style or mood from a guide source such as an image or video onto a target 3D scene. It formulates the problem as a combinatorial optimization of assigning discrete materials extracted from the guide source to discrete objects in the target 3D scene. The assignment is optimized to fulfill multiple goals: overall image mood based on several image statistics; spatial material organization and grouping as well as geometric similarity between objects that were assigned to similar materials. To be able to use common uncalibrated images and videos with unknown geometry and lighting as guides, a material estimation derives perceptually plausible reflectance, specularity, glossiness, and texture. Finally, results produced by our method are compared to manual material assignments in a perceptual study.
Export
BibTeX
@article{Nguyen2012z, TITLE = {{3D} Material Style Transfer}, AUTHOR = {Nguyen, Chuong and Ritschel, Tobias and Myszkowski, Karol and Eisemann, Elmar and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/j.1467-8659.2012.03022.x}, LOCALID = {Local-ID: 3C190E59F48516AFC1257B0100644708-Nguyen2012}, PUBLISHER = {Wiley-Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {This work proposes a technique to transfer the material style or mood from a guide source such as an image or video onto a target 3D scene. It formulates the problem as a combinatorial optimization of assigning discrete materials extracted from the guide source to discrete objects in the target 3D scene. The assignment is optimized to fulfill multiple goals: overall image mood based on several image statistics; spatial material organization and grouping as well as geometric similarity between objects that were assigned to similar materials. To be able to use common uncalibrated images and videos with unknown geometry and lighting as guides, a material estimation derives perceptually plausible reflectance, specularity, glossiness, and texture. Finally, results produced by our method are compared to manual material assignments in a perceptual study.}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {31}, NUMBER = {2}, PAGES = {431--438}, BOOKTITLE = {EUROGRAPHICS 2012}, EDITOR = {Cignoni, Paolo and Ertl, Thomas}, }
Endnote
%0 Journal Article %A Nguyen, Chuong %A Ritschel, Tobias %A Myszkowski, Karol %A Eisemann, Elmar %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T 3D Material Style Transfer : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-1537-C %F OTHER: Local-ID: 3C190E59F48516AFC1257B0100644708-Nguyen2012 %R 10.1111/j.1467-8659.2012.03022.x %7 2012-06-07 %D 2012 %X This work proposes a technique to transfer the material style or mood from a guide source such as an image or video onto a target 3D scene. It formulates the problem as a combinatorial optimization of assigning discrete materials extracted from the guide source to discrete objects in the target 3D scene. The assignment is optimized to fulfill multiple goals: overall image mood based on several image statistics; spatial material organization and grouping as well as geometric similarity between objects that were assigned to similar materials. To be able to use common uncalibrated images and videos with unknown geometry and lighting as guides, a material estimation derives perceptually plausible reflectance, specularity, glossiness, and texture. Finally, results produced by our method are compared to manual material assignments in a perceptual study. %J Computer Graphics Forum %V 31 %N 2 %& 431 %P 431 - 438 %I Wiley-Blackwell %C Oxford, UK %@ false %B EUROGRAPHICS 2012 %O EUROGRAPHICS 2012 EG 2012 The European Association for Computer Graphics 33rd Annual Conference, Cagliari, Sardinia, Italy, May 13th &#8211; 18th, 2012
Ritschel, T., Templin, K., Myszkowski, K., and Seidel, H.-P. 2012. Virtual Passepartouts. Non-Photorealistic Animation and Rendering (NPAR 2012), Eurographics Association.
Abstract
In traditional media, such as photography and painting, a cardboard sheet with a cutout (called a "passepartout") is frequently placed on top of an image. One of its functions is to increase the depth impression via the "looking-through-a-window" metaphor. This paper shows how an improved 3D effect can be achieved by using a virtual passepartout: a 2D framing that selectively masks the 3D shape and leads to additional occlusion events between the virtual world and the frame. We introduce a pipeline to design virtual passepartouts interactively as a simple post-process on RGB images augmented with depth information. Additionally, an automated approach finds the optimal virtual passepartout for a given scene. Virtual passepartouts can be used to enhance depth depiction in images and videos with depth information, renderings, stereo images and the fabrication of physical passepartouts.
Export
BibTeX
@inproceedings{RitschelTMS2012, TITLE = {Virtual Passepartouts}, AUTHOR = {Ritschel, Tobias and Templin, Krzysztof and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-3-905673-90-6}, DOI = {10.2312/PE/NPAR/NPAR12/057-063}, LOCALID = {Local-ID: AF8C88CA4485E3B1C1257A4500606C5D-RitschelTMS2012}, PUBLISHER = {Eurographics Association}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {In traditional media, such as photography and painting, a cardboard sheet with a cutout (called a \emph{passepartout}) is frequently placed on top of an image. One of its functions is to increase the depth impression via the ``looking-through-a-window'' metaphor. This paper shows how an improved 3D~effect can be achieved by using a \emph{virtual passepartout}: a 2D framing that selectively masks the 3D shape and leads to additional occlusion events between the virtual world and the frame. We introduce a pipeline to design virtual passepartouts interactively as a simple post-process on RGB images augmented with depth information. Additionally, an automated approach finds the optimal virtual passepartout for a given scene. Virtual passepartouts can be used to enhance depth depiction in images and videos with depth information, renderings, stereo images and the fabrication of physical passepartouts.}, BOOKTITLE = {Non-Photorealistic Animation and Rendering (NPAR 2012)}, EDITOR = {Asente, Paul and Grimm, Cindy}, PAGES = {57--63}, ADDRESS = {Annecy, France}, }
Endnote
%0 Conference Proceedings %A Ritschel, Tobias %A Templin, Krzysztof %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Virtual Passepartouts : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-13D3-B %R 10.2312/PE/NPAR/NPAR12/057-063 %F OTHER: Local-ID: AF8C88CA4485E3B1C1257A4500606C5D-RitschelTMS2012 %D 2012 %B Non-Photorealistic Animation and Rendering 2012 %Z date of event: 2012-06-04 - 2012-06-06 %C Annecy, France %X In traditional media, such as photography and painting, a cardboard sheet with a cutout (called a "passepartout") is frequently placed on top of an image. One of its functions is to increase the depth impression via the "looking-through-a-window" metaphor. This paper shows how an improved 3D effect can be achieved by using a virtual passepartout: a 2D framing that selectively masks the 3D shape and leads to additional occlusion events between the virtual world and the frame. We introduce a pipeline to design virtual passepartouts interactively as a simple post-process on RGB images augmented with depth information. Additionally, an automated approach finds the optimal virtual passepartout for a given scene. Virtual passepartouts can be used to enhance depth depiction in images and videos with depth information, renderings, stereo images and the fabrication of physical passepartouts. %B Non-Photorealistic Animation and Rendering %E Asente, Paul; Grimm, Cindy %P 57 - 63 %I Eurographics Association %@ 978-3-905673-90-6
Templin, K., Didyk, P., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2012. Highlight Microdisparity for Improved Gloss Depiction. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2012) 31, 4.
Abstract
Human stereo perception of glossy materials is substantially different from the perception of diffuse surfaces: A single point on a diffuse object appears the same for both eyes, whereas it appears different to both eyes on a specular object. As highlights are blurry reflections of light sources they have depth themselves, which is different from the depth of the reflecting surface. We call this difference in depth impression the "highlight disparity". Due to artistic motivation, for technical reasons, or because of incomplete data, highlights often have to be depicted on-surface, without any disparity. However, it has been shown that a lack of disparity decreases the perceived glossiness and authenticity of a material. To remedy this contradiction, our work introduces a technique for depiction of glossy materials, which improves over simple on-surface highlights, and avoids the problems of physical highlights. Our technique is computationally simple, can be easily integrated in an existing (GPU) shading system, and allows for local and interactive artistic control.
Export
BibTeX
@article{Templin2012, TITLE = {Highlight Microdisparity for Improved Gloss Depiction}, AUTHOR = {Templin, Krzysztof and Didyk, Piotr and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2185520.2185588}, LOCALID = {Local-ID: BDB99D9DBF6B290EC1257A4500551595-Templin2012}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Human stereo perception of glossy materials is substantially different from the perception of diffuse surfaces: A single point on a diffuse object appears the same for both eyes, whereas it appears different to both eyes on a specular object. As highlights are blurry reflections of light sources they have depth themselves, which is different from the depth of the reflecting surface. We call this difference in depth impression the ``highlight disparity''. Due to artistic motivation, for technical reasons, or because of incomplete data, highlights often have to be depicted on-surface, without any disparity. However, it has been shown that a lack of disparity decreases the perceived glossiness and authenticity of a material. To remedy this contradiction, our work introduces a technique for depiction of glossy materials, which improves over simple on-surface highlights, and avoids the problems of physical highlights. Our technique is computationally simple, can be easily integrated in an existing (GPU) shading system, and allows for local and interactive artistic control.}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {31}, NUMBER = {4}, PAGES = {1--5}, EID = {92}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2012}, }
Endnote
%0 Journal Article %A Templin, Krzysztof %A Didyk, Piotr %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Highlight Microdisparity for Improved Gloss Depiction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-1617-8 %F OTHER: Local-ID: BDB99D9DBF6B290EC1257A4500551595-Templin2012 %R 10.1145/2185520.2185588 %7 2012-07-01 %D 2012 %X Human stereo perception of glossy materials is substantially different from the perception of diffuse surfaces: A single point on a diffuse object appears the same for both eyes, whereas it appears different to both eyes on a specular object. As highlights are blurry reflections of light sources they have depth themselves, which is different from the depth of the reflecting surface. We call this difference in depth impression the "highlight disparity". Due to artistic motivation, for technical reasons, or because of incomplete data, highlights often have to be depicted on-surface, without any disparity. However, it has been shown that a lack of disparity decreases the perceived glossiness and authenticity of a material. To remedy this contradiction, our work introduces a technique for depiction of glossy materials, which improves over simple on-surface highlights, and avoids the problems of physical highlights. Our technique is computationally simple, can be easily integrated in an existing (GPU) shading system, and allows for local and interactive artistic control. %J ACM Transactions on Graphics %V 31 %N 4 %& 1 %P 1 - 5 %Z sequence number: 92 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2012 %O ACM SIGGRAPH 2012 Los Angeles, California, 5 - 9 August 2012
2011
Čadík, M., Aydin, T.O., Myszkowski, K., and Seidel, H.-P. 2011. On Evaluation of Video Quality Metrics: an HDR Dataset for Computer Graphics Applications. Human Vision and Electronic Imaging XVI (HVEI 2011), SPIE.
Export
BibTeX
@inproceedings{Cadik2011, TITLE = {On Evaluation of Video Quality Metrics: an {HDR} Dataset for Computer Graphics Applications}, AUTHOR = {{\v C}ad{\'i}k, Martin and Aydin, Tunc Ozan and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-0-8194-8402-4}, URL = {http://dx.doi.org/10.1117/12.878875}, DOI = {10.1117/12.878875}, PUBLISHER = {SPIE}, YEAR = {2011}, DATE = {2011}, BOOKTITLE = {Human Vision and Electronic Imaging XVI (HVEI 2011)}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N.}, PAGES = {1--9}, EID = {78650R}, SERIES = {Proceedings of SPIE}, VOLUME = {7865}, ADDRESS = {San Francisco, CA, USA}, }
Endnote
%0 Conference Proceedings %A Čadík, Martin %A Aydin, Tunc Ozan %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T On Evaluation of Video Quality Metrics: an HDR Dataset for Computer Graphics Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-13DF-B %F EDOC: 618862 %R 10.1117/12.878875 %U http://dx.doi.org/10.1117/12.878875 %D 2011 %B Human Vision and Electronic Imaging XVI %Z date of event: 2011-01-24 - 2011-01-27 %C San Francisco, CA, USA %B Human Vision and Electronic Imaging XVI %E Rogowitz, Bernice E.; Pappas, Thrasyvoulos N. %P 1 - 9 %Z sequence number: 78650R %I SPIE %@ 978-0-8194-8402-4 %B Proceedings of SPIE %N 7865
Didyk, P., Ritschel, T., Eisemann, E., Myszkowski, K., and Seidel, H.-P. 2011. A Perceptual Model for Disparity. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2011) 30, 4.
Abstract
Binocular disparity is an important cue for the human visual system to recognize spatial layout, both in reality and simulated virtual worlds. This paper introduces a perceptual model of disparity for computer graphics that is used to define a metric to compare a stereo image to an alternative stereo image and to estimate the magnitude of the perceived disparity change. Our model can be used to assess the effect of disparity to control the level of undesirable distortions or enhancements (introduced on purpose). A number of psycho-visual experiments are conducted to quantify the mutual effect of disparity magnitude and frequency to derive the model. Besides difference prediction, other applications include compression, and re-targeting. We also present novel applications in form of hybrid stereo images and backward-compatible stereo. The latter minimizes disparity in order to convey a stereo impression if special equipment is used but produces images that appear almost ordinary to the naked eye. The validity of our model and difference metric is again confirmed in a study.
Export
BibTeX
@article{DidykREMS2011, TITLE = {A Perceptual Model for Disparity}, AUTHOR = {Didyk, Piotr and Ritschel, Tobias and Eisemann, Elmar and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, DOI = {10.1145/2010324.1964991}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {Binocular disparity is an important cue for the human visual system to recognize spatial layout, both in reality and simulated virtual worlds. This paper introduces a perceptual model of disparity for computer graphics that is used to define a metric to compare a stereo image to an alternative stereo image and to estimate the magnitude of the perceived disparity change. Our model can be used to assess the effect of disparity to control the level of undesirable distortions or enhancements (introduced on purpose). A number of psycho-visual experiments are conducted to quantify the mutual effect of disparity magnitude and frequency to derive the model. Besides difference prediction, other applications include compression, and re-targeting. We also present novel applications in form of hybrid stereo images and backward-compatible stereo. The latter minimizes disparity in order to convey a stereo impression if special equipment is used but produces images that appear almost ordinary to the naked eye. The validity of our model and difference metric is again confirmed in a study.}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {30}, NUMBER = {4}, PAGES = {1--10}, EID = {96}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2011}, }
Endnote
%0 Journal Article %A Didyk, Piotr %A Ritschel, Tobias %A Eisemann, Elmar %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Perceptual Model for Disparity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-1388-F %F EDOC: 618890 %R 10.1145/2010324.1964991 %7 2011 %D 2011 %* Review method: peer-reviewed %X Binocular disparity is an important cue for the human visual system to recognize spatial layout, both in reality and simulated virtual worlds. This paper introduces a perceptual model of disparity for computer graphics that is used to define a metric to compare a stereo image to an alternative stereo image and to estimate the magnitude of the perceived disparity change. Our model can be used to assess the effect of disparity to control the level of undesirable distortions or enhancements (introduced on purpose). A number of psycho-visual experiments are conducted to quantify the mutual effect of disparity magnitude and frequency to derive the model. Besides difference prediction, other applications include compression, and re-targeting. We also present novel applications in form of hybrid stereo images and backward-compatible stereo. The latter minimizes disparity in order to convey a stereo impression if special equipment is used but produces images that appear almost ordinary to the naked eye. The validity of our model and difference metric is again confirmed in a study. %J ACM Transactions on Graphics %V 30 %N 4 %& 1 %P 1 - 10 %Z sequence number: 96 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2011 %O ACM SIGGRAPH 2011 Vancouver, BC, Canada
Pajak, D., Herzog, R., Myszkowski, K., Eisemann, E., and Seidel, H.-P. 2011. Scalable Remote Rendering with Depth and Motion-flow Augmented Streaming. Computer Graphics Forum (Proc. EUROGRAPHICS 2011) 30, 2.
Abstract
In this work, we focus on efficient compression and streaming of frames rendered from a dynamic 3D model. Remote rendering and on-the-fly streaming become increasingly attractive for interactive applications. Data is kept confidential and only images are sent to the client. Even if the client's hardware resources are modest, the user can interact with state-of-the-art rendering applications executed on the server. Our solution focuses on augmented video information, e.g., by depth, which is key to increase robustness with respect to data loss, image reconstruction, and is an important feature for stereo vision and other client-side applications. Two major challenges arise in such a setup. First, the server workload has to be controlled to support many clients, second the data transfer needs to be efficient. Consequently, our contributions are twofold. First, we reduce the server-based computations by making use of sparse sampling and temporal consistency to avoid expensive pixel evaluations. Second, our data-transfer solution takes limited bandwidths into account, is robust to information loss, and compression and decompression are efficient enough to support real-time interaction. Our key insight is to tailor our method explicitly for rendered 3D content and shift some computations on client GPUs, to better balance the server/client workload. Our framework is progressive, scalable, and allows us to stream augmented high-resolution (e.g., HD-ready) frames with small bandwidth on standard hardware.
Export
BibTeX
@article{HerzogEG2011, TITLE = {Scalable Remote Rendering with Depth and Motion-flow Augmented Streaming}, AUTHOR = {Pajak, Dawid and Herzog, Robert and Myszkowski, Karol and Eisemann, Elmar and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1111/j.1467-8659.2011.01871.x}, PUBLISHER = {Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {In this work, we focus on efficient compression and streaming of frames rendered from a dynamic 3D model. Remote rendering and on-the-fly streaming become increasingly attractive for interactive applications. Data is kept confidential and only images are sent to the client. Even if the client's hardware resources are modest, the user can interact with state-of-the-art rendering applications executed on the server. Our solution focuses on augmented video information, e.g., by depth, which is key to increase robustness with respect to data loss, image reconstruction, and is an important feature for stereo vision and other client-side applications. Two major challenges arise in such a setup. First, the server workload has to be controlled to support many clients, second the data transfer needs to be efficient. Consequently, our contributions are twofold. First, we reduce the server-based computations by making use of sparse sampling and temporal consistency to avoid expensive pixel evaluations. Second, our data-transfer solution takes limited bandwidths into account, is robust to information loss, and compression and decompression are efficient enough to support real-time interaction. Our key insight is to tailor our method explicitly for rendered 3D content and shift some computations on client GPUs, to better balance the server/client workload. Our framework is progressive, scalable, and allows us to stream augmented high-resolution (e.g., HD-ready) frames with small bandwidth on standard hardware.}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {30}, NUMBER = {2}, PAGES = {415--424}, BOOKTITLE = {EUROGRAPHICS 2011}, EDITOR = {Chen, Min and Deussen, Oliver}, }
Endnote
%0 Journal Article %A Pajak, Dawid %A Herzog, Robert %A Myszkowski, Karol %A Eisemann, Elmar %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Scalable Remote Rendering with Depth and Motion-flow Augmented Streaming : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-13F2-E %F EDOC: 618866 %R 10.1111/j.1467-8659.2011.01871.x %7 2011 %D 2011 %* Review method: peer-reviewed %X In this work, we focus on efficient compression and streaming of frames rendered from a dynamic 3D model. Remote rendering and on-the-fly streaming become increasingly attractive for interactive applications. Data is kept confidential and only images are sent to the client. Even if the client's hardware resources are modest, the user can interact with state-of-the-art rendering applications executed on the server. Our solution focuses on augmented video information, e.g., by depth, which is key to increase robustness with respect to data loss, image reconstruction, and is an important feature for stereo vision and other client-side applications. Two major challenges arise in such a setup. First, the server workload has to be controlled to support many clients, second the data transfer needs to be efficient. Consequently, our contributions are twofold. First, we reduce the server-based computations by making use of sparse sampling and temporal consistency to avoid expensive pixel evaluations. Second, our data-transfer solution takes limited bandwidths into account, is robust to information loss, and compression and decompression are efficient enough to support real-time interaction. Our key insight is to tailor our method explicitly for rendered 3D content and shift some computations on client GPUs, to better balance the server/client workload. 
Our framework is progressive, scalable, and allows us to stream augmented high-resolution (e.g., HD-ready) frames with small bandwidth on standard hardware. %J Computer Graphics Forum %V 30 %N 2 %& 415 %P 415 - 424 %I Blackwell %C Oxford, UK %B EUROGRAPHICS 2011 %O EUROGRAPHICS 2011 The European Association for Computer Graphics 32nd Annual Conference ; Llandudno in Wales, UK, April 11th - 15th, 2011 EG 2011
Templin, K., Didyk, P., Ritschel, T., Eisemann, E., Myszkowski, K., and Seidel, H.-P. 2011. Apparent Resolution Enhancement for Animations. Proceedings SCCG 2011, ACM.
Abstract
Presenting the variety of high resolution images captured by high-quality devices, or generated on the computer, is challenging due to the limited resolution of current display devices. Our recent work addressed this problem by taking into account human perception. By applying a specific motion to a high-resolution image shown on a low-resolution display device, human eye tracking and integration could be exploited to achieve apparent resolution enhancement. To this end, the high-resolution image is decomposed into a sequence of temporally varying low-resolution images that are displayed at high refresh rates. However, this approach is limited to a specific class of simple or constant movements, i.e., "panning". In this work, we generalize this idea to arbitrary motions, as well as to videos with arbitrary motion flow. The resulting image sequences are compared to a range of other down-sampling methods.
Export
BibTeX
@inproceedings{Templin2011, TITLE = {Apparent Resolution Enhancement for Animations}, AUTHOR = {Templin, Krzysztof and Didyk, Piotr and Ritschel, Tobias and Eisemann, Elmar and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-4503-1978-2}, DOI = {10.1145/2461217.2461230}, PUBLISHER = {ACM}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {Presenting the variety of high resolution images captured by high-quality devices, or generated on the computer, is challenging due to the limited resolution of current display devices. Our recent work addressed this problem by taking into account human perception. By applying a specific motion to a high-resolution image shown on a low-resolution display device, human eye tracking and integration could be exploited to achieve apparent resolution enhancement. To this end, the high-resolution image is decomposed into a sequence of temporally varying low-resolution images that are displayed at high refresh rates. However, this approach is limited to a specific class of simple or constant movements, i.e., ``panning''. In this work, we generalize this idea to arbitrary motions, as well as to videos with arbitrary motion flow. The resulting image sequences are compared to a range of other down-sampling methods.}, BOOKTITLE = {Proceedings SCCG 2011}, EDITOR = {Nishita, Tomoyuki and {\v D}urikovi{\v c}, Roman}, PAGES = {85--92}, ADDRESS = {Vini{\v c}n{\'e}, Slovakia}, }
Endnote
%0 Conference Proceedings %A Templin, Krzysztof %A Didyk, Piotr %A Ritschel, Tobias %A Eisemann, Elmar %A Myszkowski, Karol %A Seidel, Hans-Peter %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Apparent Resolution Enhancement for Animations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-138B-9 %F EDOC: 618886 %R 10.1145/2461217.2461230 %D 2011 %B 27th Spring Conference on Computer Graphics %Z date of event: 2011-04-28 - 2011-04-30 %C Viničné, Slovakia %X Presenting the variety of high resolution images captured by high-quality devices, or generated on the computer, is challenging due to the limited resolution of current display devices. Our recent work addressed this problem by taking into account human perception. By applying a specific motion to a high-resolution image shown on a low-resolution display device, human eye tracking and integration could be exploited to achieve apparent resolution enhancement. To this end, the high-resolution image is decomposed into a sequence of temporally varying low-resolution images that are displayed at high refresh rates. However, this approach is limited to a specific class of simple or constant movements, i.e., "panning". In this work, we generalize this idea to arbitrary motions, as well as to videos with arbitrary motion flow. The resulting image sequences are compared to a range of other down-sampling methods. %B Proceedings SCCG 2011 %E Nishita, Tomoyuki; Ďurikovič, Roman %P 85 - 92 %I ACM %@ 978-1-4503-1978-2
2010
Aydin, T.O., Čadík, M., Myszkowski, K., and Seidel, H.-P. 2010a. Visually Significant Edges. ACM Transactions on Applied Perception 7, 4.
Abstract
Numerous image processing and computer graphics methods make use of either explicitly computed strength of image edges, or an implicit edge strength definition that is integrated into their algorithms. In both cases, the end result is highly affected by the computation of edge strength. We address several shortcomings of the widely used gradient magnitude based edge strength model through the computation of a hypothetical human visual system (HVS) response at edge locations. Contrary to gradient magnitude, the resulting "visual significance" values account for various HVS mechanisms such as luminance adaptation and visual masking, and are scaled in perceptually linear units that are uniform across images. The visual significance computation is implemented in a fast multi-scale second generation wavelet framework, which we use to demonstrate the differences in image retargeting, HDR image stitching and tone mapping applications with respect to the gradient magnitude model. Our results suggest that simple perceptual models provide qualitative improvements on applications utilizing edge strength at the cost of a modest computational burden.
Export
BibTeX
@article{TuncTAP2010, TITLE = {Visually Significant Edges}, AUTHOR = {Aydin, Tunc Ozan and {\v C}ad{\'i}k, Martin and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {1544-3558}, DOI = {10.1145/1823738.1823745}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Numerous image processing and computer graphics methods make use of either explicitly computed strength of image edges, or an implicit edge strength definition that is integrated into their algorithms. In both cases, the end result is highly affected by the computation of edge strength. We address several shortcomings of the widely used gradient magnitude based edge strength model through the computation of a hypothetical human visual system (HVS) response at edge locations. Contrary to gradient magnitude, the resulting ``visual significance'' values account for various HVS mechanisms such as luminance adaptation and visual masking, and are scaled in perceptually linear units that are uniform across images. The visual significance computation is implemented in a fast multi-scale second generation wavelet framework, which we use to demonstrate the differences in image retargeting, HDR image stitching and tone mapping applications with respect to the gradient magnitude model. Our results suggest that simple perceptual models provide qualitative improvements on applications utilizing edge strength at the cost of a modest computational burden.}, JOURNAL = {ACM Transactions on Applied Perception}, VOLUME = {7}, NUMBER = {4}, PAGES = {1--14}, EID = {27}, }
Endnote
%0 Journal Article %A Aydin, Tunc Ozan %A Čadík, Martin %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Visually Significant Edges : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-179A-A %F EDOC: 537306 %R 10.1145/1823738.1823745 %7 2010 %D 2010 %* Review method: peer-reviewed %X Numerous image processing and computer graphics methods make use of either explicitly computed strength of image edges, or an implicit edge strength definition that is integrated into their algorithms. In both cases, the end result is highly affected by the computation of edge strength. We address several shortcomings of the widely used gradient magnitude based edge strength model through the computation of a hypothetical human visual system (HVS) response at edge locations. Contrary to gradient magnitude, the resulting "visual significance" values account for various HVS mechanisms such as luminance adaptation and visual masking, and are scaled in perceptually linear units that are uniform across images. The visual significance computation is implemented in a fast multi-scale second generation wavelet framework, which we use to demonstrate the differences in image retargeting, HDR image stitching and tone mapping applications with respect to the gradient magnitude model. Our results suggest that simple perceptual models provide qualitative improvements on applications utilizing edge strength at the cost of a modest computational burden. %J ACM Transactions on Applied Perception %V 7 %N 4 %& 1 %P 1 - 14 %Z sequence number: 27 %I ACM %C New York, NY %@ false
Aydin, T.O., Čadík, M., Myszkowski, K., and Seidel, H.-P. 2010b. Video Quality Assessment for Computer Graphics Applications. ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2010) 29, 6.
Export
BibTeX
@article{TuncSGAsia2010, TITLE = {Video Quality Assessment for Computer Graphics Applications}, AUTHOR = {Aydin, Tunc Ozan and {\v C}ad{\'i}k, Martin and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, ISBN = {978-1-4503-0439-9}, DOI = {10.1145/1866158.1866187}, LOCALID = {Local-ID: C125675300671F7B-0ED72325CD8F187FC12577CF005BA5C5-TuncSGAsia2010}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2010}, DATE = {2010}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia)}, VOLUME = {29}, NUMBER = {6}, PAGES = {1--12}, EID = {161}, BOOKTITLE = {Proceedings of ACM SIGGRAPH Asia 2010}, EDITOR = {Drettakis, George}, }
Endnote
%0 Journal Article %A Aydin, Tunc Ozan %A Čadík, Martin %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Video Quality Assessment for Computer Graphics Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1797-0 %F EDOC: 537307 %R 10.1145/1866158.1866187 %F OTHER: Local-ID: C125675300671F7B-0ED72325CD8F187FC12577CF005BA5C5-TuncSGAsia2010 %D 2010 %J ACM Transactions on Graphics %V 29 %N 6 %& 1 %P 1 - 12 %Z sequence number: 161 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH Asia 2010 %O ACM SIGGRAPH Asia 2010 Seoul, South Korea %@ 978-1-4503-0439-9
Didyk, P., Ritschel, T., Eisemann, E., Myszkowski, K., and Seidel, H.-P. 2010a. Adaptive Image-space Stereo View Synthesis. Vision, Modeling & Visualization (VMV 2010), Eurographics Association.
Export
BibTeX
@inproceedings{Didyk2010b, TITLE = {Adaptive Image-space Stereo View Synthesis}, AUTHOR = {Didyk, Piotr and Ritschel, Tobias and Eisemann, Elmar and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-3-905673-79-1}, DOI = {10.2312/PE/VMV/VMV10/299-306}, PUBLISHER = {Eurographics Association}, YEAR = {2010}, DATE = {2010}, BOOKTITLE = {Vision, Modeling \& Visualization (VMV 2010)}, EDITOR = {Koch, Reinhard and Kolb, Andreas and Rezk-Salama, Christof}, PAGES = {299--306}, ADDRESS = {Siegen, Germany}, }
Endnote
%0 Conference Proceedings %A Didyk, Piotr %A Ritschel, Tobias %A Eisemann, Elmar %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Adaptive Image-space Stereo View Synthesis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-172C-4 %F EDOC: 537308 %R 10.2312/PE/VMV/VMV10/299-306 %D 2010 %B 15th International Workshop on Vision, Modeling, and Visualization %Z date of event: 2010-11-15 - 2010-11-17 %C Siegen, Germany %B Vision, Modeling & Visualization %E Koch, Reinhard; Kolb, Andreas; Rezk-Salama, Christof %P 299 - 306 %I Eurographics Association %@ 978-3-905673-79-1
Didyk, P., Eisemann, E., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2010b. Perceptually-motivated Real-time Temporal Upsampling of 3D Content for High-refresh-rate Displays. Computer Graphics Forum (Proc. EUROGRAPHICS 2010) 29, 2.
Abstract
High-refresh-rate displays (e.g., 120 Hz) have recently become available on the consumer market and quickly gain in popularity. One of their aims is to reduce the perceived blur created by moving objects that are tracked by the human eye. However, an improvement is only achieved if the video stream is produced at the same high refresh rate (i.e., 120 Hz). Some devices, such as LCD TVs, solve this problem by converting low-refresh-rate content (i.e., 50 Hz PAL) into a higher temporal resolution (i.e., 200 Hz) based on two-dimensional optical flow. In our approach, we will show how rendered three-dimensional images produced by recent graphics hardware can be up-sampled more efficiently, resulting in higher quality at the same time. Our algorithm relies on several perceptual findings and preserves the naturalness of the original sequence. A psychophysical study validates our approach and illustrates that temporally up-sampled video streams are preferred over the standard low-rate input by the majority of users. We show that our solution improves task performance on high-refresh-rate displays.
Export
BibTeX
@article{Didyk2010, TITLE = {Perceptually-motivated Real-time Temporal Upsampling of {3D} Content for High-refresh-rate Displays}, AUTHOR = {Didyk, Piotr and Eisemann, Elmar and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/j.1467-8659.2009.01641.x}, PUBLISHER = {Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {High-refresh-rate displays (e.\,g.,~120\,Hz) have recently become available on the consumer market and quickly gain on popularity. One of their aims is to reduce the perceived blur created by moving objects that are tracked by the human eye. However, an improvement is only achieved if the video stream is produced at the same high refresh rate (i.\,e.~120\,Hz). Some devices, such as LCD~TVs, solve this problem by converting low-refresh-rate content (i.\,e.~50\,Hz~PAL) into a higher temporal resolution (i.\,e.~200\,Hz) based on two-dimensional optical flow. In our approach, we will show how rendered three-dimensional images produced by recent graphics hardware can be up-sampled more efficiently resulting in higher quality at the same time. Our algorithm relies on several perceptual findings and preserves the naturalness of the original sequence. A psychophysical study validates our approach and illustrates that temporally up-sampled video streams are preferred over the standard low-rate input by the majority of users. We show that our solution improves task performance on high-refresh-rate displays.}, JOURNAL = {Computer Graphics Forum (Proc. EUROGRAPHICS)}, VOLUME = {29}, NUMBER = {2}, PAGES = {713--722}, BOOKTITLE = {EUROGRAPHICS 2010}, EDITOR = {Akenine-M{\"o}ller, Tomas and Zwicker, Matthias}, }
Endnote
%0 Journal Article %A Didyk, Piotr %A Eisemann, Elmar %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptually-motivated Real-time Temporal Upsampling of 3D Content for High-refresh-rate Displays : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1778-7 %F EDOC: 537284 %R 10.1111/j.1467-8659.2009.01641.x %7 2010 %D 2010 %X High-refresh-rate displays (e.g., 120 Hz) have recently become available on the consumer market and quickly gain on popularity. One of their aims is to reduce the perceived blur created by moving objects that are tracked by the human eye. However, an improvement is only achieved if the video stream is produced at the same high refresh rate (i.e., 120 Hz). Some devices, such as LCD TVs, solve this problem by converting low-refresh-rate content (i.e., 50 Hz PAL) into a higher temporal resolution (i.e., 200 Hz) based on two-dimensional optical flow. In our approach, we will show how rendered three-dimensional images produced by recent graphics hardware can be up-sampled more efficiently resulting in higher quality at the same time. Our algorithm relies on several perceptual findings and preserves the naturalness of the original sequence. A psychophysical study validates our approach and illustrates that temporally up-sampled video streams are preferred over the standard low-rate input by the majority of users. We show that our solution improves task performance on high-refresh-rate displays.
%J Computer Graphics Forum %V 29 %N 2 %& 713 %P 713 - 722 %I Blackwell %C Oxford, UK %@ false %B EUROGRAPHICS 2010 %O EUROGRAPHICS 2010 The European Association for Computer Graphics 31st Annual Conference ; Norrköping, Sweden, May 3rd - 7th, 2010 EG 2010
Didyk, P., Eisemann, E., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2010c. Apparent Display Resolution Enhancement for Moving Images. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2010) 29, 4.
Export
BibTeX
@article{Didyk2010a, TITLE = {Apparent Display Resolution Enhancement for Moving Images}, AUTHOR = {Didyk, Piotr and Eisemann, Elmar and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, ISBN = {978-1-4503-0210-4}, DOI = {10.1145/1833349.1778850}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2010}, DATE = {2010}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {29}, NUMBER = {4}, PAGES = {1--8}, EID = {113}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2010}, EDITOR = {Hoppe, Hugues}, }
Endnote
%0 Journal Article %A Didyk, Piotr %A Eisemann, Elmar %A Ritschel, Tobias %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Apparent Display Resolution Enhancement for Moving Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1734-0 %F EDOC: 537269 %R 10.1145/1833349.1778850 %7 2010 %D 2010 %J ACM Transactions on Graphics %O TOG %V 29 %N 4 %& 1 %P 1 - 8 %Z sequence number: 113 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2010 %O ACM SIGGRAPH 2010 Los Angeles, CA ; [25-29 July 2010] %@ 978-1-4503-0210-4
Havran, V., Filip, J., and Myszkowski, K. 2010. Bidirectional Texture Function Compression based on the Multilevel Vector Quantization. Computer Graphics Forum 29, 1.
Export
BibTeX
@article{Havran2010CGF, TITLE = {Bidirectional Texture Function Compression based on the Multilevel Vector Quantization}, AUTHOR = {Havran, Vlastimil and Filip, Jiri and Myszkowski, Karol}, LANGUAGE = {eng}, ISSN = {0167-7055}, DOI = {10.1111/j.1467-8659.2009.01585.x}, PUBLISHER = {Blackwell}, ADDRESS = {Oxford, UK}, YEAR = {2010}, DATE = {2010}, JOURNAL = {Computer Graphics Forum}, VOLUME = {29}, NUMBER = {1}, PAGES = {175--190}, }
Herzog, R., Eisemann, E., Myszkowski, K., and Seidel, H.-P. 2010. Spatio-Temporal Upsampling on the GPU. Proceedings I3D 2010, ACM.
Abstract
Pixel processing is becoming increasingly expensive for real-time applications due to the complexity of today's shaders and high-resolution framebuffers. However, most shading results are spatially or temporally coherent, which allows for sparse sampling and reuse of neighboring pixel values. This paper proposes a simple framework for spatio-temporal upsampling on modern GPUs. In contrast to previous work, which focuses either on temporal or spatial processing on the GPU, we exploit coherence in both. Our algorithm combines adaptive motion-compensated filtering over time and geometry-aware upsampling in image space. It is robust with respect to high-frequency temporal changes, and achieves substantial performance improvements by limiting the number of recomputed samples per frame. At the same time, we increase the quality of spatial upsampling by recovering missing information from previous frames. This temporal strategy also allows us to ensure that the image converges to a higher quality result.
Export
BibTeX
@inproceedings{HerzogI3D2010, TITLE = {Spatio-Temporal Upsampling on the {GPU}}, AUTHOR = {Herzog, Robert and Eisemann, Elmar and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-60558-939-8}, DOI = {10.1145/1730804.1730819}, PUBLISHER = {ACM}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Pixel processing is becoming increasingly expensive for real-time applications due to the complexity of today's shaders and high-resolution framebuffers. However, most shading results are spatially or temporally coherent, which allows for sparse sampling and reuse of neighboring pixel values. This paper proposes a simple framework for spatio-temporal upsampling on modern GPUs. In contrast to previous work, which focuses either on temporal or spatial processing on the GPU, we exploit coherence in both. Our algorithm combines adaptive motion-compensated filtering over time and geometry-aware upsampling in image space. It is robust with respect to high-frequency temporal changes, and achieves substantial performance improvements by limiting the number of recomputed samples per frame. At the same time, we increase the quality of spatial upsampling by recovering missing information from previous frames. This temporal strategy also allows us to ensure that the image converges to a higher quality result.}, BOOKTITLE = {Proceedings I3D 2010}, EDITOR = {Varshney, Amitabh and Wyman, Chris and Aliaga, Daniel and Oliveira, Manuel M.}, PAGES = {91--98}, ADDRESS = {Washington DC, USA}, }
Pajak, D., Čadík, M., Aydin, T.O., Okabe, M., Myszkowski, K., and Seidel, H.-P. 2010a. Contrast Prescription for Multiscale Image Editing. The Visual Computer 26, 6.
Export
BibTeX
@article{Cadik2010, TITLE = {Contrast Prescription for Multiscale Image Editing}, AUTHOR = {Pajak, Dawid and {\v C}ad{\'i}k, Martin and Aydin, Tunc Ozan and Okabe, Makoto and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0178-2789}, DOI = {10.1007/s00371-010-0485-3}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2010}, DATE = {2010}, JOURNAL = {The Visual Computer}, VOLUME = {26}, NUMBER = {6}, PAGES = {739--748}, }
Pajak, D., Čadík, M., Aydin, T.O., Myszkowski, K., and Seidel, H.-P. 2010b. Visual Maladaptation in Contrast Domain. Human Vision and Electronic Imaging XV (HVEI 2010), SPIE.
Export
BibTeX
@inproceedings{Pajak2010, TITLE = {Visual Maladaptation in Contrast Domain}, AUTHOR = {Pajak, Dawid and {\v C}ad{\'i}k, Martin and Aydin, Tunc Ozan and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {9780819479204}, DOI = {10.1117/12.844934}, PUBLISHER = {SPIE}, YEAR = {2010}, DATE = {2010}, BOOKTITLE = {Human Vision and Electronic Imaging XV (HVEI 2010)}, EDITOR = {Rogowitz, Bernice and Pappas, Thrasyvoulos N.}, PAGES = {1--12}, EID = {752710}, SERIES = {Proceedings of SPIE}, VOLUME = {7527}, ADDRESS = {San Jose, CA, USA}, }
Reinhard, E., Ward, G., Pattanaik, S., Debevec, P., Heidrich, W., and Myszkowski, K., eds. 2010. High Dynamic Range Imaging: Acquisition, Display, and Image-based Lighting. Elsevier (Morgan Kaufmann), Burlington, MA.
Export
BibTeX
@book{HDRtextBook2010, TITLE = {High Dynamic Range Imaging: Acquisition, Display, and Image-based Lighting}, EDITOR = {Reinhard, Erik and Ward, Greg and Pattanaik, Sumanta and Debevec, Paul and Heidrich, Wolfgang and Myszkowski, Karol}, LANGUAGE = {eng}, ISBN = {978-0-12-374914-7}, PUBLISHER = {Elsevier (Morgan Kaufmann)}, ADDRESS = {Burlington, MA}, EDITION = {2. ed.}, YEAR = {2010}, DATE = {2010}, PAGES = {XVIII, 650 p.}, }
2009
Aydin, T.O., Myszkowski, K., and Seidel, H.-P. 2009. Predicting Display Visibility Under Dynamically Changing Lighting Conditions. Computer Graphics Forum (Proc. Eurographics) 28, 2.
Abstract
Display devices, more than ever, are finding their way into electronic consumer goods as a result of recent trends in providing more functionality and user interaction. Combined with new developments in display technology towards a higher reproducible luminance range, the mobility and variation in capability of display devices are constantly increasing. Consequently, in real-life usage it is now very likely that the display emission will be distorted by spatially and temporally varying reflections, and that the observer's visual system will not be adapted to the particular display that she is viewing at that moment. The actual perception of the display content cannot be fully understood by considering only steady-state illumination and adaptation conditions. We propose an objective method for display visibility analysis, formulating the problem as a full-reference image quality assessment problem, where the display emission under "ideal" conditions is used as the reference for real-life conditions. Our work includes a human visual system model that accounts for maladaptation and temporal recovery of sensitivity. As an example application, we integrate our method into a global illumination simulator and analyze the visibility of a car-interior display under realistic lighting conditions.
Export
BibTeX
@article{Tunc2009EG, TITLE = {Predicting Display Visibility Under Dynamically Changing Lighting Conditions}, AUTHOR = {Aydin, Tunc O. and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-33AE0A5CE1E47467C125755C00347B6E-Tunc2009EG}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Display devices, more than ever, are finding their way into electronic consumer goods as a result of recent trends in providing more functionality and user interaction. Combined with new developments in display technology towards a higher reproducible luminance range, the mobility and variation in capability of display devices are constantly increasing. Consequently, in real-life usage it is now very likely that the display emission will be distorted by spatially and temporally varying reflections, and that the observer's visual system will not be adapted to the particular display that she is viewing at that moment. The actual perception of the display content cannot be fully understood by considering only steady-state illumination and adaptation conditions. We propose an objective method for display visibility analysis, formulating the problem as a full-reference image quality assessment problem, where the display emission under ``ideal'' conditions is used as the reference for real-life conditions. Our work includes a human visual system model that accounts for maladaptation and temporal recovery of sensitivity. As an example application, we integrate our method into a global illumination simulator and analyze the visibility of a car-interior display under realistic lighting conditions.}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics)}, VOLUME = {28}, NUMBER = {2}, PAGES = {173--182}, }
Banterle, F., Debattista, K., Artusi, A., et al. 2009. High Dynamic Range Imaging and LDR Expansion for Generating HDR Content. EUROGRAPHICS State-of-the-Art Report, Eurographics.
Abstract
In the last few years, researchers in the field of High Dynamic Range (HDR) Imaging have focused on providing tools for expanding LDR content for the generation of HDR images, due to the growing popularity of HDR in applications such as photography and rendering via Image-Based Lighting, and the imminent arrival of HDR displays to the consumer market. LDR content expansion is required due to the lack of fast and reliable consumer-level HDR capture for still images and videos. Furthermore, LDR content expansion will allow the re-use of legacy LDR stills, videos, and LDR applications created over the last century and more. The use of certain LDR expansion methods, those that are based on the inversion of tone mapping operators, has made it possible to create novel compression algorithms that tackle the problem of the size of HDR content storage, which remains one of the major obstacles to be overcome for the adoption of HDR. These methods are used in conjunction with traditional LDR compression methods and can evolve accordingly. The goal of this report is to provide a comprehensive overview of HDR Imaging and an in-depth review of these emerging topics. Moreover, we propose how to classify and validate them. We discuss limitations of these methods and identify remaining challenges for the future.
Export
BibTeX
@inproceedings{Banterle2009, TITLE = {High Dynamic Range Imaging and {LDR} Expansion for Generating {HDR} Content}, AUTHOR = {Banterle, Francesco and Debattista, Kurt and Artusi, Alessandro and Pattanaik, Sumanta and Myszkowski, Karol and Ledda, Patrick and Bloj, Marina and Chalmers, Alan}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-C54556685BC86D61C125755C005A9EBC-Banterle2009}, PUBLISHER = {Eurographics}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {In the last few years, researchers in the field of High Dynamic Range (HDR) Imaging have focused on providing tools for expanding LDR content for the generation of HDR images, due to the growing popularity of HDR in applications such as photography and rendering via Image-Based Lighting, and the imminent arrival of HDR displays to the consumer market. LDR content expansion is required due to the lack of fast and reliable consumer-level HDR capture for still images and videos. Furthermore, LDR content expansion will allow the re-use of legacy LDR stills, videos, and LDR applications created over the last century and more. The use of certain LDR expansion methods, those that are based on the inversion of tone mapping operators, has made it possible to create novel compression algorithms that tackle the problem of the size of HDR content storage, which remains one of the major obstacles to be overcome for the adoption of HDR. These methods are used in conjunction with traditional LDR compression methods and can evolve accordingly. The goal of this report is to provide a comprehensive overview of HDR Imaging and an in-depth review of these emerging topics. Moreover, we propose how to classify and validate them. We discuss limitations of these methods and identify remaining challenges for the future.}, BOOKTITLE = {EUROGRAPHICS State-of-the-Art Report}, EDITOR = {Pauly, Marc and Greiner, G{\"u}nther}, PAGES = {17--44}, }
Didyk, P., Eisemann, E., Ritschel, T., Myszkowski, K., and Seidel, H.-P. 2009. A Question of Time: Importance and Possibilities of High Refresh-rates. Visual Computing Research Conference, Intel Visual Computing Institute.
Abstract
This work will discuss shortcomings of traditional rendering techniques on today's widespread LCD screens. The main observation is that 3D renderings often appear blurred when observed on such a display. Although this might seem to be a shortcoming of the hardware, such blur is actually a consequence of how the human visual system perceives such displays. In this work, we introduce a perception-aware rendering technique that is of very low cost but significantly improves performance as well as quality. Especially in conjunction with more recent devices, initially conceived for 3D shutter glasses, our approach achieves significant gains. Besides quality, we show that such approaches even improve task performance, which makes them a crucial component for future interactive applications.
Export
BibTeX
@inproceedings{Didyk2009, TITLE = {A Question of Time: Importance and Possibilities of High Refresh-rates}, AUTHOR = {Didyk, Piotr and Eisemann, Elmar and Ritschel, Tobias and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-6F99D73D0B04CA52C12576B9005417E0-Didyk2009}, PUBLISHER = {Intel Visual Computing Institute}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {This work will discuss shortcomings of traditional rendering techniques on today's widespread LCD screens. The main observation is that 3D renderings often appear blurred when observed on such a display. Although this might seem to be a shortcoming of the hardware, such blur is actually a consequence of how the human visual system perceives such displays. In this work, we introduce a perception-aware rendering technique that is of very low cost but significantly improves performance as well as quality. Especially in conjunction with more recent devices, initially conceived for 3D shutter glasses, our approach achieves significant gains. Besides quality, we show that such approaches even improve task performance, which makes them a crucial component for future interactive applications.}, BOOKTITLE = {Visual Computing Research Conference}, PAGES = {1--3}, }
Herzog, R., Myszkowski, K., and Seidel, H.-P. 2009. Anisotropic Radiance-Cache Splatting for Efficiently Computing High-Quality Global Illumination with Lightcuts. Computer Graphics Forum (Proc. EUROGRAPHICS), Wiley-Blackwell.
Ihrke, M., Ritschel, T., Smith, K., Grosch, T., Myszkowski, K., and Seidel, H.-P. 2009. A Perceptual Evaluation of 3D Unsharp Masking. Human Vision and Electronic Imaging XIV, IS&T/SPIE's 21st Annual Symposium on Electronic Imaging, SPIE.
Abstract
Much research has gone into developing methods for enhancing the contrast of displayed 3D scenes. In the current study, we investigated the perceptual impact of an algorithm recently proposed by Ritschel et al. [1] that provides a general technique for enhancing the perceived contrast in synthesized scenes. Their algorithm extends traditional image-based Unsharp Masking to a 3D scene, achieving a scene-coherent enhancement. We conducted a standardized perceptual experiment to test the proposition that a 3D unsharp enhanced scene was superior to the original scene in terms of perceived contrast and preference. Furthermore, the impact of different settings of the algorithm's main parameters, enhancement strength (λ) and gradient size (σ), was studied in order to provide an estimate of a reasonable parameter space for the method. All participants preferred a clearly visible enhancement over the original, non-enhanced scenes, and the setting for objectionable enhancement was far above the preferred settings. The effect of the gradient size σ was negligible. The general pattern found for the parameters provides a useful guideline for designers when making use of 3D Unsharp Masking: as a rule of thumb, they can easily determine the strength at which they start to perceive an enhancement and use twice this value for a good effect. Since the value for objectionable results was twice as large again, artifacts should not impose restrictions on the applicability of this rule.
Export
BibTeX
@inproceedings{Ihrke2009SPIE, TITLE = {A Perceptual Evaluation of {3D} Unsharp Masking}, AUTHOR = {Ihrke, Matthias and Ritschel, Tobias and Smith, Kaleigh and Grosch, Thorsten and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1117/12.809026}, LOCALID = {Local-ID: C125675300671F7B-5AB79508CF9875C4C125755C0035BB4C-Ihrke2009SPIE}, PUBLISHER = {SPIE}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Much research has gone into developing methods for enhancing the contrast of displayed 3D scenes. In the current study, we investigated the perceptual impact of an algorithm recently proposed by Ritschel et al. [1] that provides a general technique for enhancing the perceived contrast in synthesized scenes. Their algorithm extends traditional image-based Unsharp Masking to a 3D scene, achieving a scene-coherent enhancement. We conducted a standardized perceptual experiment to test the proposition that a 3D unsharp enhanced scene was superior to the original scene in terms of perceived contrast and preference. Furthermore, the impact of different settings of the algorithm's main parameters, enhancement strength ($\lambda$) and gradient size ($\sigma$), was studied in order to provide an estimate of a reasonable parameter space for the method. All participants preferred a clearly visible enhancement over the original, non-enhanced scenes, and the setting for objectionable enhancement was far above the preferred settings. The effect of the gradient size $\sigma$ was negligible. The general pattern found for the parameters provides a useful guideline for designers when making use of 3D Unsharp Masking: as a rule of thumb, they can easily determine the strength at which they start to perceive an enhancement and use twice this value for a good effect. Since the value for objectionable results was twice as large again, artifacts should not impose restrictions on the applicability of this rule.}, BOOKTITLE = {Human Vision and Electronic Imaging XIV, IS\&T/SPIE's 21st Annual Symposium on Electronic Imaging}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N.}, PAGES = {1--12}, EID = {72400R}, SERIES = {Annual Symposium on Electronic Imaging}, }
Ritschel, T., Ihrke, M., Frisvad, J.R., Coppens, J., Myszkowski, K., and Seidel, H.-P. 2009. Temporal Glare: Real-Time Dynamic Simulation of the Scattering in the Human Eye. Computer Graphics Forum (Proc. Eurographics 2009) 28, 3.
Abstract
Glare is a consequence of light scattered within the human eye when looking at bright light sources. This effect can be exploited for tone mapping since adding glare to the depiction of high-dynamic range (HDR) imagery on a low-dynamic range (LDR) medium can dramatically increase perceived contrast. Even though most, if not all, subjects report perceiving glare as a bright pattern that fluctuates in time, up to now it has only been modeled as a static phenomenon. We argue that the temporal properties of glare are a strong means to increase perceived brightness and to produce realistic and attractive renderings of bright light sources. Based on the anatomy of the human eye, we propose a model that enables real-time simulation of dynamic glare on a GPU. This allows an improved depiction of HDR images on LDR media for interactive applications like games, feature films, or even by adding movement to initially static HDR images. By conducting psychophysical studies, we validate that our method improves perceived brightness and that dynamic glare-renderings are often perceived as more attractive depending on the chosen scene.
Export
BibTeX
@article{Ritschel2009EG, TITLE = {Temporal Glare: Real-Time Dynamic Simulation of the Scattering in the Human Eye}, AUTHOR = {Ritschel, Tobias and Ihrke, Matthias and Frisvad, Jeppe Revall and Coppens, Joris and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-C0AF37EF8D7C4059C125755C00337FD6-Ritschel2009EG}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Glare is a consequence of light scattered within the human eye when looking at bright light sources. This effect can be exploited for tone mapping since adding glare to the depiction of high-dynamic range (HDR) imagery on a low-dynamic range (LDR) medium can dramatically increase perceived contrast. Even though most, if not all, subjects report perceiving glare as a bright pattern that fluctuates in time, up to now it has only been modeled as a static phenomenon. We argue that the temporal properties of glare are a strong means to increase perceived brightness and to produce realistic and attractive renderings of bright light sources. Based on the anatomy of the human eye, we propose a model that enables real-time simulation of dynamic glare on a GPU. This allows an improved depiction of HDR images on LDR media for interactive applications like games, feature films, or even by adding movement to initially static HDR images. By conducting psychophysical studies, we validate that our method improves perceived brightness and that dynamic glare-renderings are often perceived as more attractive depending on the chosen scene.}, JOURNAL = {Computer Graphics Forum (Proc. Eurographics 2009)}, VOLUME = {28}, NUMBER = {3}, PAGES = {183--192}, }
Endnote
%0 Journal Article %A Ritschel, Tobias %A Ihrke, Matthias %A Frisvad, Jeppe Revall %A Coppens, Joris %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Temporal Glare: Real-Time Dynamic Simulation of the Scattering in the Human Eye : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-19E7-E %F EDOC: 520489 %F OTHER: Local-ID: C125675300671F7B-C0AF37EF8D7C4059C125755C00337FD6-Ritschel2009EG %D 2009 %* Review method: peer-reviewed %X Glare is a consequence of light scattered within the human eye when looking at bright light sources. This effect can be exploited for tone mapping since adding glare to the depiction of high-dynamic range (HDR) imagery on a low-dynamic range (LDR) medium can dramatically increase perceived contrast. Even though most, if not all, subjects report perceiving glare as a bright pattern that fluctuates in time, up to now it has only been modeled as a static phenomenon. We argue that the temporal properties of glare are a strong means to increase perceived brightness and to produce realistic and attractive renderings of bright light sources. Based on the anatomy of the human eye, we propose a model that enables real-time simulation of dynamic glare on a GPU. This allows an improved depiction of HDR images on LDR media for interactive applications like games, feature films, or even by adding movement to initially static HDR images. By conducting psychophysical studies, we validate that our method improves perceived brightness and that dynamic glare-renderings are often perceived as more attractive depending on the chosen scene. %J Computer Graphics Forum (Proc. Eurographics 2009) %V 28 %N 3 %& 183 %P 183 - 192
2008
Aydin, T.O., Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2008. Dynamic Range Independent Image Quality Assessment. ACM Transactions on Graphics (Proc. ACM SIGGRAPH 2008) 27, 3.
Abstract
The diversity of display technologies and introduction of high dynamic range imagery introduces the necessity of comparing images of radically different dynamic ranges. Current quality assessment metrics are not suitable for this task, as they assume that both reference and test images have the same dynamic range. Image fidelity measures employed by a majority of current metrics, based on the difference of pixel intensity or contrast values between test and reference images, result in meaningless predictions if this assumption does not hold. We present a novel image quality metric capable of operating on an image pair where both images have arbitrary dynamic ranges. Our metric utilizes a model of the human visual system, and its central idea is a new definition of visible distortion based on the detection and classification of visible changes in the image structure. Our metric is carefully calibrated and its performance is validated through perceptual experiments. We demonstrate possible applications of our metric to the evaluation of direct and inverse tone mapping operators as well as the analysis of the image appearance on displays with various characteristics.
Export
BibTeX
@article{Tunc08SG, TITLE = {Dynamic Range Independent Image Quality Assessment}, AUTHOR = {Aydin, Tunc Ozan and Mantiuk, Rafa{\l} and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0730-0301}, URL = {http://doi.acm.org/10.1145/1360612.1360668}, DOI = {10.1145/1360612.1360668}, LOCALID = {Local-ID: C125756E0038A185-155666108816CD9DC12574C500543902-Tunc08SG}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {The diversity of display technologies and introduction of high dynamic range imagery introduces the necessity of comparing images of radically different dynamic ranges. Current quality assessment metrics are not suitable for this task, as they assume that both reference and test images have the same dynamic range. Image fidelity measures employed by a majority of current metrics, based on the difference of pixel intensity or contrast values between test and reference images, result in meaningless predictions if this assumption does not hold. We present a novel image quality metric capable of operating on an image pair where both images have arbitrary dynamic ranges. Our metric utilizes a model of the human visual system, and its central idea is a new definition of visible distortion based on the detection and classification of visible changes in the image structure. Our metric is carefully calibrated and its performance is validated through perceptual experiments. We demonstrate possible applications of our metric to the evaluation of direct and inverse tone mapping operators as well as the analysis of the image appearance on displays with various characteristics.}, JOURNAL = {ACM Transactions on Graphics (Proc. ACM SIGGRAPH)}, VOLUME = {27}, NUMBER = {3}, PAGES = {1--10}, EID = {69}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2008}, EDITOR = {Turk, Greg}, }
Endnote
%0 Journal Article %A Aydin, Tunc Ozan %A Mantiuk, Rafa&#322; %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Dynamic Range Independent Image Quality Assessment : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1B77-7 %F EDOC: 427982 %R 10.1145/1360612.1360668 %U http://doi.acm.org/10.1145/1360612.1360668 %F OTHER: Local-ID: C125756E0038A185-155666108816CD9DC12574C500543902-Tunc08SG %D 2008 %X The diversity of display technologies and introduction of high dynamic range imagery introduces the necessity of comparing images of radically different dynamic ranges. Current quality assessment metrics are not suitable for this task, as they assume that both reference and test images have the same dynamic range. Image fidelity measures employed by a majority of current metrics, based on the difference of pixel intensity or contrast values between test and reference images, result in meaningless predictions if this assumption does not hold. We present a novel image quality metric capable of operating on an image pair where both images have arbitrary dynamic ranges. Our metric utilizes a model of the human visual system, and its central idea is a new definition of visible distortion based on the detection and classification of visible changes in the image structure. Our metric is carefully calibrated and its performance is validated through perceptual experiments. We demonstrate possible applications of our metric to the evaluation of direct and inverse tone mapping operators as well as the analysis of the image appearance on displays with various characteristics. 
%J ACM Transactions on Graphics %V 27 %N 3 %& 1 %P 1 - 10 %Z sequence number: 69 %I ACM %C New York, NY %@ false %B Proceedings of ACM SIGGRAPH 2008 %O ACM SIGGRAPH 2008 Los Angeles, CA
Creem-Regehr, S. and Myszkowski, K., eds. 2008. Symposium on Applied Perception in Graphics and Visualization : proceedings APGV 2008. ACM.
Export
BibTeX
@proceedings{Myszkowski2008APGV, TITLE = {Symposium on Applied Perception in Graphics and Visualization : proceedings APGV 2008}, EDITOR = {Creem-Regehr, Sarah and Myszkowski, Karol}, LANGUAGE = {eng}, ISBN = {978-1-59593-981-4}, LOCALID = {Local-ID: C125756E0038A185-8AFC12175FDF8F80C12574C500637171-Myszkowski2008APGV}, PUBLISHER = {ACM}, YEAR = {2008}, DATE = {2008}, PAGES = {157}, }
Endnote
%0 Conference Proceedings %E Creem-Regehr, Sarah %E Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Symposium on Applied Perception in Graphics and Visualization : proceedings APGV 2008 : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1D1A-D %F EDOC: 428021 %@ 978-1-59593-981-4 %F OTHER: Local-ID: C125756E0038A185-8AFC12175FDF8F80C12574C500637171-Myszkowski2008APGV %I ACM %D 2008 %B Untitled Event %Z date of event: 2008-08-09 - 2008-08-10 %D 2008 %C Los Angeles, CA %P 157
Herzog, R., Kinuwaki, S., Myszkowski, K., and Seidel, H.-P. 2008. Render2MPEG: A Perception-based Framework Towards Integrating Rendering and Video Compression. The European Association for Computer Graphics 29th Annual Conference, EUROGRAPHICS 2008, Blackwell.
Abstract
Currently 3D animation rendering and video compression are completely independent processes even if rendered frames are streamed on-the-fly within a client-server platform. In such scenario, which may involve time-varying transmission bandwidths and different display characteristics at the client side, dynamic adjustment of the rendering quality to such requirements can lead to a better use of server resources. In this work, we present a framework where the renderer and MPEG codec are coupled through a straightforward interface that provides precise motion vectors from the rendering side to the codec and perceptual error thresholds for each pixel in the opposite direction. The perceptual error thresholds take into account bandwidth-dependent quantization errors resulting from the lossy compression as well as image content-dependent luminance and spatial contrast masking. The availability of the discrete cosine transform (DCT) coefficients at the codec side enables to use advanced models of the human visual system (HVS) in the perceptual error threshold derivation without incurring any significant cost. Those error thresholds are then used to control the rendering quality and make it well aligned with the compressed stream quality. In our prototype system we use the lightcuts technique developed by Walter et al., which we enhance to handle dynamic image sequences, and an MPEG-2 implementation. Our results clearly demonstrate many advantages of coupling the rendering with video compression in terms of faster rendering. Furthermore, temporally coherent rendering leads to a reduction of temporal artifacts.
Export
BibTeX
@inproceedings{Herzog08EG, TITLE = {{Render2MPEG}: A Perception-based Framework Towards Integrating Rendering and Video Compression}, AUTHOR = {Herzog, Robert and Kinuwaki, Shinichi and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://dx.doi.org/10.1111/j.1467-8659.2008.01115.x}, DOI = {10.1111/j.1467-8659.2008.01115.x}, LOCALID = {Local-ID: C125756E0038A185-3B410E71DC037794C12574C5005576A5-Herzog08EG}, PUBLISHER = {Blackwell}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {Currently 3D animation rendering and video compression are completely independent processes even if rendered frames are streamed on-the-fly within a client-server platform. In such scenario, which may involve time-varying transmission bandwidths and different display characteristics at the client side, dynamic adjustment of the rendering quality to such requirements can lead to a better use of server resources. In this work, we present a framework where the renderer and MPEG codec are coupled through a straightforward interface that provides precise motion vectors from the rendering side to the codec and perceptual error thresholds for each pixel in the opposite direction. The perceptual error thresholds take into account bandwidth-dependent quantization errors resulting from the lossy compression as well as image content-dependent luminance and spatial contrast masking. The availability of the discrete cosine transform (DCT) coefficients at the codec side enables to use advanced models of the human visual system (HVS) in the perceptual error threshold derivation without incurring any significant cost. Those error thresholds are then used to control the rendering quality and make it well aligned with the compressed stream quality. In our prototype system we use the lightcuts technique developed by Walter et al., which we enhance to handle dynamic image sequences, and an MPEG-2 implementation. 
Our results clearly demonstrate many advantages of coupling the rendering with video compression in terms of faster rendering. Furthermore, temporally coherent rendering leads to a reduction of temporal artifacts.}, BOOKTITLE = {The European Association for Computer Graphics 29th Annual Conference, EUROGRAPHICS 2008}, EDITOR = {Drettakis, George and Scopigno, Roberto}, PAGES = {183--192}, SERIES = {Computer Graphics Forum}, }
Endnote
%0 Conference Proceedings %A Herzog, Robert %A Kinuwaki, Shinichi %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Render2MPEG: A Perception-based Framework Towards Integrating Rendering and Video Compression : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1CD2-8 %F EDOC: 428103 %R 10.1111/j.1467-8659.2008.01115.x %U http://dx.doi.org/10.1111/j.1467-8659.2008.01115.x %F OTHER: Local-ID: C125756E0038A185-3B410E71DC037794C12574C5005576A5-Herzog08EG %I Blackwell %D 2008 %B Untitled Event %Z date of event: 2008-04-14 - 2008-04-14 %C Crete, Greece %X Currently 3D animation rendering and video compression are completely independent processes even if rendered frames are streamed on-the-fly within a client-server platform. In such scenario, which may involve time-varying transmission bandwidths and different display characteristics at the client side, dynamic adjustment of the rendering quality to such requirements can lead to a better use of server resources. In this work, we present a framework where the renderer and MPEG codec are coupled through a straightforward interface that provides precise motion vectors from the rendering side to the codec and perceptual error thresholds for each pixel in the opposite direction. The perceptual error thresholds take into account bandwidth-dependent quantization errors resulting from the lossy compression as well as image content-dependent luminance and spatial contrast masking. The availability of the discrete cosine transform (DCT) coefficients at the codec side enables to use advanced models of the human visual system (HVS) in the perceptual error threshold derivation without incurring any significant cost. 
Those error thresholds are then used to control the rendering quality and make it well aligned with the compressed stream quality. In our prototype system we use the lightcuts technique developed by Walter et al., which we enhance to handle dynamic image sequences, and an MPEG-2 implementation. Our results clearly demonstrate many advantages of coupling the rendering with video compression in terms of faster rendering. Furthermore, temporally coherent rendering leads to a reduction of temporal artifacts. %B The European Association for Computer Graphics 29th Annual Conference, EUROGRAPHICS 2008 %E Drettakis, George; Scopigno, Roberto %P 183 - 192 %I Blackwell %B Computer Graphics Forum
Mantiuk, R., Zdrojewska, D., Tomaszewska, A., Mantiuk, R., and Myszkowski, K. 2008. Selected Problems of High Dynamic Range Video Compression and GPU-based Contrast Domain Tone Mapping. Proceedings of the 24th Spring Conference on Computer Graphics (SCCG’08), SCCG.
Abstract
The main goal of High Dynamic Range Imaging (HDRI) is precise reproduction of real world appearance in terms of intensity levels and color gamut at all stages of image and video processing from acquisition to display. In our work, we investigate the problem of lossy HDR image and video compression and provide a number of novel solutions, which are optimized for storage efficiency or backward compatibility with existing compression standards. To take advantage of HDR information even for traditional low-dynamic range displays, we design tone mapping algorithms, which adjust HDR contrast ranges in a scene to those available in typical display devices.
Export
BibTeX
@inproceedings{Myszkowski2007, TITLE = {Selected Problems of High Dynamic Range Video Compression and {GPU}-based Contrast Domain Tone Mapping}, AUTHOR = {Mantiuk, Radoslaw and Zdrojewska, Dorota and Tomaszewska, Anna and Mantiuk, Rafa{\l} and Myszkowski, Karol}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-5A787107DA03F936C12574C5005043C7-Myszkowski2007}, PUBLISHER = {SCCG}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {The main goal of High Dynamic Range Imaging (HDRI) is precise reproduction of real world appearance in terms of intensity levels and color gamut at all stages of image and video processing from acquisition to display. In our work, we investigate the problem of lossy HDR image and video compression and provide a number of novel solutions, which are optimized for storage efficiency or backward compatibility with existing compression standards. To take advantage of HDR information even for traditional low-dynamic range displays, we design tone mapping algorithms, which adjust HDR contrast ranges in a scene to those available in typical display devices.}, BOOKTITLE = {Proceedings of the 24th Spring Conference on Computer Graphics (SCCG'08)}, EDITOR = {Myszkowski, Karol}, PAGES = {11--18}, }
Endnote
%0 Conference Proceedings %A Mantiuk, Radoslaw %A Zdrojewska, Dorota %A Tomaszewska, Anna %A Mantiuk, Rafa&#322; %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Selected Problems of High Dynamic Range Video Compression and GPU-based Contrast Domain Tone Mapping : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1CE4-F %F EDOC: 428158 %F OTHER: Local-ID: C125756E0038A185-5A787107DA03F936C12574C5005043C7-Myszkowski2007 %I SCCG %D 2008 %B Untitled Event %Z date of event: 2008-04-21 - 2008-04-23 %C Budmerice, Slovakia %X The main goal of High Dynamic Range Imaging (HDRI) is precise reproduction of real world appearance in terms of intensity levels and color gamut at all stages of image and video processing from acquisition to display. In our work, we investigate the problem of lossy HDR image and video compression and provide a number of novel solutions, which are optimized for storage efficiency or backward compatibility with existing compression standards. To take advantage of HDR information even for traditional low-dynamic range displays, we design tone mapping algorithms, which adjust HDR contrast ranges in a scene to those available in typical display devices. %B Proceedings of the 24th Spring Conference on Computer Graphics (SCCG'08) %E Myszkowski, Karol %P 11 - 18 %I SCCG
Myszkowski, K., ed. 2008. Proceedings of the 24th Spring Conference on Computer Graphics (SCCG ’08). SCCG, Bratislava.
Export
BibTeX
@book{Myszkowski2008SCCG, TITLE = {Proceedings of the 24th Spring Conference on Computer Graphics ({SCCG} '08)}, EDITOR = {Myszkowski, Karol}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-B374E4826C5F0B2EC12574C50063B91D-Myszkowski2008SCCG}, PUBLISHER = {SCCG}, ADDRESS = {Bratislava}, YEAR = {2008}, DATE = {2008}, PAGES = {211}, }
Endnote
%0 Edited Book %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Proceedings of the 24th Spring Conference on Computer Graphics (SCCG '08) : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1CB7-6 %F EDOC: 428174 %F OTHER: Local-ID: C125756E0038A185-B374E4826C5F0B2EC12574C50063B91D-Myszkowski2008SCCG %I SCCG %C Bratislava %D 2008 %P 211
Myszkowski, K., Mantiuk, R., and Krawczyk, G. 2008. High Dynamic Range Video. Morgan & Claypool Publishers, San Rafael, USA.
Abstract
As new displays and cameras offer enhanced color capabilities, there is a need to extend the precision of digital content. High Dynamic Range (HDR) imaging encodes images and video with higher than normal 8 bit-per-color-channel precision, enabling representation of the complete color gamut and the full visible range of luminance. However, to realize the transition from traditional to HDR imaging, it is necessary to develop imaging algorithms that work with the high-precision data. To make such algorithms effective and feasible in practice, it is necessary to take advantage of the limitations of the human visual system by aligning the data shortcomings to those of the human eye, thus limiting storage and processing precision. Therefore, human visual perception is the key component of the solutions we discuss in this book. This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, including the aspect of backward compatibility with existing formats. Finally, we review existing HDR display technologies and the associated problems of image contrast and brightness adjustment. For this purpose tone mapping is employed to accommodate HDR content to LDR devices. Conversely, the so-called inverse tone mapping is required to upgrade LDR content for displaying on HDR devices. We overview HDR-enabled image and video quality metrics, which are needed to verify algorithms at all stages of the pipeline.
Additionally, we cover successful examples of the HDR technology applications, in particular, in computer graphics and computer vision. The goal of this book is to present all discussed components of the HDR pipeline with the main focus on video. For some pipeline stages HDR video solutions are either not well established or do not exist at all, in which case we describe techniques for single HDR images. In such cases we attempt to select the techniques, which can be extended into temporal domain. Whenever needed, relevant background information on human perception is given, which enables better understanding of the design choices behind the discussed algorithms and HDR equipment. Table of Contents: Introduction / Representation of an HDR Image / HDR Image and Video Acquisition / HDR Image Quality / HDR Image, Video, and Texture Compression / Tone Reproduction / HDR Display Devices / LDR2HDR: Recovering Dynamic Range in Legacy Content / HDRI in Computer Graphics / Software
Export
BibTeX
@book{Myszkowski2008, TITLE = {High Dynamic Range Video}, AUTHOR = {Myszkowski, Karol and Mantiuk, Rafa{\l} and Krawczyk, Grzegorz}, LANGUAGE = {eng}, ISBN = {9781598292145}, URL = {http://dx.doi.org/10.2200/S00109ED1V01Y200806CGR005}, DOI = {10.2200/S00109ED1V01Y200806CGR005}, LOCALID = {Local-ID: C125756E0038A185-B9003D1C8852615FC12574C50051C1EE-Myszkowski2008}, PUBLISHER = {Morgan \& Claypool Publishers}, ADDRESS = {San Rafael, USA}, YEAR = {2008}, DATE = {2008}, PAGES = {158}, }
Endnote
%0 Book %A Myszkowski, Karol %A Mantiuk, Rafa&#322; %A Krawczyk, Grzegorz %+ Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T High Dynamic Range Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1BD7-2 %F EDOC: 428175 %@ 9781598292145 %U http://dx.doi.org/10.2200/S00109ED1V01Y200806CGR005 %F OTHER: Local-ID: C125756E0038A185-B9003D1C8852615FC12574C50051C1EE-Myszkowski2008 %I Morgan & Claypool Publishers %C San Rafael, USA %D 2008 %P 158 %X As new displays and cameras offer enhanced color capabilities, there is a need to extend the precision of digital content. High Dynamic Range (HDR) imaging encodes images and video with higher than normal 8 bit-per-color-channel precision, enabling representation of the complete color gamut and the full visible range of luminance. However, to realize the transition from traditional to HDR imaging, it is necessary to develop imaging algorithms that work with the high-precision data. To make such algorithms effective and feasible in practice, it is necessary to take advantage of the limitations of the human visual system by aligning the data shortcomings to those of the human eye, thus limiting storage and processing precision. Therefore, human visual perception is the key component of the solutions we discuss in this book. This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. 
Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, including the aspect of backward compatibility with existing formats. Finally, we review existing HDR display technologies and the associated problems of image contrast and brightness adjustment. For this purpose tone mapping is employed to accommodate HDR content to LDR devices. Conversely, the so-called inverse tone mapping is required to upgrade LDR content for displaying on HDR devices. We overview HDR-enabled image and video quality metrics, which are needed to verify algorithms at all stages of the pipeline. Additionally, we cover successful examples of the HDR technology applications, in particular, in computer graphics and computer vision. The goal of this book is to present all discussed components of the HDR pipeline with the main focus on video. For some pipeline stages HDR video solutions are either not well established or do not exist at all, in which case we describe techniques for single HDR images. In such cases we attempt to select the techniques, which can be extended into temporal domain. Whenever needed, relevant background information on human perception is given, which enables better understanding of the design choices behind the discussed algorithms and HDR equipment. Table of Contents: Introduction / Representation of an HDR Image / HDR Image and Video Acquisition / HDR Image Quality / HDR Image, Video, and Texture Compression / Tone Reproduction / HDR Display Devices / LDR2HDR: Recovering Dynamic Range in Legacy Content / HDRI in Computer Graphics / Software
Ritschel, T., Smith, K., Ihrke, M., Grosch, T., Myszkowski, K., and Seidel, H.-P. 2008. 3D Unsharp Masking for Scene Coherent Enhancement. Proceedings of ACM SIGGRAPH 2008, ACM.
Export
BibTeX
@inproceedings{Ritschel08Sig, TITLE = {{3D} Unsharp Masking for Scene Coherent Enhancement}, AUTHOR = {Ritschel, Tobias and Smith, Kaleigh and Ihrke, Matthias and Grosch, Thorsten and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://doi.acm.org/10.1145/1360612.1360689}, DOI = {10.1145/1360612.1360689}, LOCALID = {Local-ID: C125756E0038A185-41E8E32E3589C504C12574C500535A27-Ritschel08Sig}, PUBLISHER = {ACM}, YEAR = {2008}, DATE = {2008}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2008}, EDITOR = {Turk, Greg}, PAGES = {Art.90.1--8}, SERIES = {ACM Transactions on Graphics}, }
Endnote
%0 Conference Proceedings %A Ritschel, Tobias %A Smith, Kaleigh %A Ihrke, Matthias %A Grosch, Thorsten %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T 3D Unsharp Masking for Scene Coherent Enhancement : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1AB8-0 %F EDOC: 428200 %R 10.1145/1360612.1360689 %U http://doi.acm.org/10.1145/1360612.1360689 %F OTHER: Local-ID: C125756E0038A185-41E8E32E3589C504C12574C500535A27-Ritschel08Sig %I ACM %D 2008 %B Untitled Event %Z date of event: 2008-08-11 - 2008-08-15 %C Los Angeles, USA %B Proceedings of ACM SIGGRAPH 2008 %E Turk, Greg %P Art.90.1 - 8 %I ACM %B ACM Transactions on Graphics
Smith, K., Landes, P.-E., Thollot, J., and Myszkowski, K. 2008. Apparent Greyscale: A Simple and Fast Conversion to Perceptually Accurate Images and Video. The European Association for Computer Graphics 29th Annual Conference, EUROGRAPHICS 2008, Blackwell.
Abstract
This paper presents a quick and simple method for converting complex images and video to perceptually accurate greyscale versions. We use a two-step approach first to globally assign grey values and determine colour ordering, then second, to locally enhance the greyscale to reproduce the original contrast. Our global mapping is image independent and incorporates the Helmholtz-Kohlrausch colour appearance effect for predicting differences between isoluminant colours. Our multiscale local contrast enhancement reintroduces lost discontinuities only in regions that insufficiently represent original chromatic contrast. All operations are restricted so that they preserve the overall image appearance, lightness range and differences, colour ordering, and spatial details, resulting in perceptually accurate achromatic reproductions of the colour original.
Export
BibTeX
@inproceedings{Smith2008, TITLE = {Apparent Greyscale: A Simple and Fast Conversion to Perceptually Accurate Images and Video}, AUTHOR = {Smith, Kaleigh and Landes, Pierre-Edouard and Thollot, Jo{\"e}lle and Myszkowski, Karol}, LANGUAGE = {eng}, URL = {http://dx.doi.org/10.1111/j.1467-8659.2008.01116.x}, DOI = {10.1111/j.1467-8659.2008.01116.x}, LOCALID = {Local-ID: C125756E0038A185-E88EBE366EBA274FC1257495005568E5-Smith2008}, PUBLISHER = {Blackwell}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {This paper presents a quick and simple method for converting complex images and video to perceptually accurate greyscale versions. We use a two-step approach first to globally assign grey values and determine colour ordering, then second, to locally enhance the greyscale to reproduce the original contrast. Our global mapping is image independent and incorporates the Helmholtz-Kohlrausch colour appearance effect for predicting differences between isoluminant colours. Our multiscale local contrast enhancement reintroduces lost discontinuities only in regions that insufficiently represent original chromatic contrast. All operations are restricted so that they preserve the overall image appearance, lightness range and differences, colour ordering, and spatial details, resulting in perceptually accurate achromatic reproductions of the colour original.}, BOOKTITLE = {The European Association for Computer Graphics 29th Annual Conference, EUROGRAPHICS 2008}, EDITOR = {Drettakis, George and Scopigno, Roberto}, PAGES = {193--200}, SERIES = {Computer Graphics Forum}, }
Endnote
%0 Conference Proceedings %A Smith, Kaleigh %A Landes, Pierre-Edouard %A Thollot, Jo&#235;lle %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Apparent Greyscale: A Simple and Fast Conversion to Perceptually Accurate Images and Video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1AF9-D %F EDOC: 428226 %R 10.1111/j.1467-8659.2008.01116.x %U http://dx.doi.org/10.1111/j.1467-8659.2008.01116.x %F OTHER: Local-ID: C125756E0038A185-E88EBE366EBA274FC1257495005568E5-Smith2008 %I Blackwell %D 2008 %B Untitled Event %Z date of event: 2008-04-14 - 2008-04-18 %C Crete, Greece %X This paper presents a quick and simple method for converting complex images and video to perceptually accurate greyscale versions. We use a two-step approach first to globally assign grey values and determine colour ordering, then second, to locally enhance the greyscale to reproduce the original contrast. Our global mapping is image independent and incorporates the Helmholtz-Kohlrausch colour appearance effect for predicting differences between isoluminant colours. Our multiscale local contrast enhancement reintroduces lost discontinuities only in regions that insufficiently represent original chromatic contrast. All operations are restricted so that they preserve the overall image appearance, lightness range and differences, colour ordering, and spatial details, resulting in perceptually accurate achromatic reproductions of the colour original. %B The European Association for Computer Graphics 29th Annual Conference, EUROGRAPHICS 2008 %E Drettakis, George; Scopigno, Roberto %P 193 - 200 %I Blackwell %B Computer Graphics Forum
Yoshida, A., Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2008a. Perception-Based Contrast Enhancement Model for Complex Images in High Dynamic Range. Human Vision and Electronic Imaging XIII, SPIE.
Export
BibTeX
@inproceedings{Yoshida_SPIE2008, TITLE = {Perception-Based Contrast Enhancement Model for Complex Images in High Dynamic Range}, AUTHOR = {Yoshida, Akiko and Krawczyk, Grzegorz and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-0-8194-6978-6}, URL = {http://dx.doi.org/10.1117/12.766500}, DOI = {10.1117/12.766500}, LOCALID = {Local-ID: C125756E0038A185-1AF67FD9509EB0FAC12573AF006318E7-Yoshida_SPIE2008}, PUBLISHER = {SPIE}, YEAR = {2008}, DATE = {2008}, BOOKTITLE = {Human Vision and Electronic Imaging XIII}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N.}, PAGES = {68060C--1-11}, SERIES = {Proceedings of SPIE-IS\&T Electronic Imaging}, }
Endnote
%0 Conference Proceedings %A Yoshida, Akiko %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perception-Based Contrast Enhancement Model for Complex Images in High Dynamic Range : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1CA2-3 %F EDOC: 428258 %R 10.1117/12.766500 %U http://dx.doi.org/10.1117/12.766500 %F OTHER: Local-ID: C125756E0038A185-1AF67FD9509EB0FAC12573AF006318E7-Yoshida_SPIE2008 %I SPIE %D 2008 %B Untitled Event %Z date of event: 2008-01-28 - 2008-01-31 %C San Jose, CA, USA %B Human Vision and Electronic Imaging XIII %E Rogowitz, Bernice E.; Pappas, Thrasyvoulos N. %P 68060C - 1-11 %I SPIE %@ 978-0-8194-6978-6 %B Proceedings of SPIE-IS&T Electronic Imaging
Yoshida, A., Ihrke, M., Mantiuk, R., and Seidel, H.-P. 2008b. Brightness of the Glare Illusion. Symposium on Applied Perception in Graphics and Visualization : proceedings APGV 2008, ACM.
Export
BibTeX
@inproceedings{Yoshida2008_APGV, TITLE = {Brightness of the Glare Illusion}, AUTHOR = {Yoshida, Akiko and Ihrke, Matthias and Mantiuk, Rafa{\l} and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-59593-981-4}, URL = {http://www.mpi-inf.mpg.de/~yoshida/Yoshida_APGV2008.pdf}, LOCALID = {Local-ID: C125756E0038A185-0747F286D3E9D7EDC12574410035A60A-Yoshida2008_APGV}, PUBLISHER = {ACM}, YEAR = {2008}, DATE = {2008}, BOOKTITLE = {Symposium on Applied Perception in Graphics and Visualization : proceedings APGV 2008}, EDITOR = {Creem-Regehr, Sarah and Myszkowski, Karol}, PAGES = {83--90}, }
Endnote
%0 Conference Proceedings %A Yoshida, Akiko %A Ihrke, Matthias %A Mantiuk, Rafa&#322; %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Brightness of the Glare Illusion : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1B26-E %F EDOC: 428257 %U http://www.mpi-inf.mpg.de/~yoshida/Yoshida_APGV2008.pdf %F OTHER: Local-ID: C125756E0038A185-0747F286D3E9D7EDC12574410035A60A-Yoshida2008_APGV %I ACM %D 2008 %B Untitled Event %Z date of event: 2008-08-09 - 2008-08-10 %C Los Angeles, CA, USA %B Symposium on Applied Perception in Graphics and Visualization : proceedings APGV 2008 %E Creem-Regehr, Sarah; Myszkowski, Karol %P 83 - 90 %I ACM %@ 978-1-59593-981-4
2007
Gösele, M. and Myszkowski, K. 2007. HDR Applications in Computer Graphics. In: High-Dynamic-Range (HDR) Vision. Springer, Berlin.
Export
BibTeX
@incollection{Myszkowski2007HDRbook1, TITLE = {{HDR} Applications in Computer Graphics}, AUTHOR = {G{\"o}sele, Michael and Myszkowski, Karol}, LANGUAGE = {eng}, ISSN = {1437-0387}, ISBN = {978-3-540-44432-9; 978-3-540-44433-6}, DOI = {10.1007/978-3-540-44433-6_13}, PUBLISHER = {Springer}, ADDRESS = {Berlin}, YEAR = {2007}, DATE = {2007}, BOOKTITLE = {High-Dynamic-Range (HDR) Vision}, EDITOR = {Hoefflinger, Bernd}, PAGES = {193--210}, SERIES = {Springer Series in Advanced Microelectronics}, VOLUME = {26}, }
Endnote
%0 Book Section %A G&#246;sele, Michael %A Myszkowski, Karol %+ External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T HDR Applications in Computer Graphics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-D414-F %R 10.1007/978-3-540-44433-6_13 %D 2007 %B High-Dynamic-Range (HDR) Vision %E Hoefflinger, Bernd %P 193 - 210 %I Springer %C Berlin %@ 978-3-540-44432-9 978-3-540-44433-6 %S Springer Series in Advanced Microelectronics %N 26 %@ false
Herzog, R., Havran, V., Kinuwaki, S., Myszkowski, K., and Seidel, H.-P. 2007a. Global Illumination using Photon Ray Splatting. Max-Planck-Institut für Informatik, Saarbrücken, Germany.
Abstract
We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. A vast majority of existing global illumination approaches either quickly computes an approximate solution, which may not be adequate for previews, or performs a much more time-consuming computation to obtain high-quality results for the indirect illumination. Our method improves photon density estimation, which is an approximate solution, and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Our novel lighting computation is derived from basic radiometric theory and requires only small changes to existing photon splatting approaches. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.
Export
BibTeX
@techreport{HerzogReport2007, TITLE = {Global Illumination using Photon Ray Splatting}, AUTHOR = {Herzog, Robert and Havran, Vlastimil and Kinuwaki, Shinichi and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, NUMBER = {MPI-I-2007-4-007}, LOCALID = {Local-ID: C12573CC004A8E26-88919E23BF524D6AC12573C4005B8D41-HerzogReport2007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken, Germany}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. A vast majority of existing global illumination approaches either quickly computes an approximate solution, which may not be adequate for previews, or performs a much more time-consuming computation to obtain high-quality results for the indirect illumination. Our method improves photon density estimation, which is an approximate solution, and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Our novel lighting computation is derived from basic radiometric theory and requires only small changes to existing photon splatting approaches. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Herzog, Robert %A Havran, Vlastimil %A Kinuwaki, Shinichi %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Global Illumination using Photon Ray Splatting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1F57-6 %F EDOC: 356502 %F OTHER: Local-ID: C12573CC004A8E26-88919E23BF524D6AC12573C4005B8D41-HerzogReport2007 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken, Germany %D 2007 %P 66 p. %X We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. A vast majority of existing global illumination approaches either quickly computes an approximate solution, which may not be adequate for previews, or performs a much more time-consuming computation to obtain high-quality results for the indirect illumination. Our method improves photon density estimation, which is an approximate solution, and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Our novel lighting computation is derived from basic radiometric theory and requires only small changes to existing photon splatting approaches. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. 
This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space. %B Research Report
Herzog, R., Havran, V., Kinuwaki, S., Myszkowski, K., and Seidel, H.-P. 2007b. Global Illumination using Photon Ray Splatting. Eurographics 2007, Blackwell.
Abstract
We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. Many existing global illumination approaches either quickly compute an overly approximate solution or perform an orders of magnitude slower computation to obtain high-quality results for the indirect illumination. The proposed method improves photon density estimation and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our photon splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.
Export
BibTeX
@inproceedings{HerzogEG2007, TITLE = {Global Illumination using Photon Ray Splatting}, AUTHOR = {Herzog, Robert and Havran, Vlastimil and Kinuwaki, Shinichi and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, LOCALID = {Local-ID: C12573CC004A8E26-922F7B2EB5B8D78CC12573C4004C5B93-HerzogEG2007}, PUBLISHER = {Blackwell}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. Many existing global illumination approaches either quickly compute an overly approximate solution or perform an orders of magnitude slower computation to obtain high-quality results for the indirect illumination. The proposed method improves photon density estimation and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our photon splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.}, BOOKTITLE = {Eurographics 2007}, EDITOR = {Cohen-Or, Daniel and Slavik, Pavel}, PAGES = {503--513}, SERIES = {Computer Graphics Forum}, }
Endnote
%0 Conference Proceedings %A Herzog, Robert %A Havran, Vlastimil %A Kinuwaki, Shinichi %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Global Illumination using Photon Ray Splatting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1F5A-F %F EDOC: 356513 %F OTHER: Local-ID: C12573CC004A8E26-922F7B2EB5B8D78CC12573C4004C5B93-HerzogEG2007 %I Blackwell %D 2007 %B Untitled Event %Z date of event: 2007-09-03 - 2007-09-07 %C Prague, Czech Republic %X We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. Many existing global illumination approaches either quickly compute an overly approximate solution or perform an orders of magnitude slower computation to obtain high-quality results for the indirect illumination. The proposed method improves photon density estimation and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. 
In addition, we show that our photon splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space. %B Eurographics 2007 %E Cohen-Or, Daniel; Slavik, Pavel %P 503 - 513 %I Blackwell %B Computer Graphics Forum %@ false
Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2007a. Contrast Restoration by Adaptive Countershading. Eurographics 2007, Blackwell.
Abstract
We address the problem of communicating contrasts in images degraded with respect to their original due to processing with computer graphics algorithms. Such degradation can happen during the tone mapping of high dynamic range images, or while rendering scenes with low contrast shaders or with poor lighting. Inspired by a family of known perceptual illusions: Craik-O'Brien-Cornsweet, we enhance contrasts by modulating brightness at the edges to create countershading profiles. We generalize unsharp masking by coupling it with a multi-resolution local contrast metric to automatically create the countershading profiles from the sub-band components which are individually adjusted to each corrected feature to best enhance contrast with respect to the reference. Additionally, we employ a visual detection model to assure that our enhancements are not perceived as objectionable halo artifacts. The overall appearance of images remains mostly unchanged and the enhancement is achieved within the available dynamic range. We use our method to post-correct tone mapped images and improve images using their depth information.
Export
BibTeX
Endnote
Krawczyk, G., Myszkowski, K., and Brosch, D. 2007b. HDR Tone Mapping. In: High-Dynamic-Range (HDR) Vision. Springer, Berlin.
Export
BibTeX
@incollection{Myszkowski2007HDRbook2, TITLE = {{HDR} Tone Mapping}, AUTHOR = {Krawczyk, Grzegorz and Myszkowski, Karol and Brosch, Daniel}, LANGUAGE = {eng}, ISSN = {1437-0387}, ISBN = {978-3-540-44432-9; 978-3-540-44433-6}, DOI = {10.1007/978-3-540-44433-6_11}, PUBLISHER = {Springer}, ADDRESS = {Berlin}, YEAR = {2007}, DATE = {2007}, BOOKTITLE = {High-Dynamic-Range (HDR) Vision}, EDITOR = {Hoefflinger, Bernd}, PAGES = {193--210}, SERIES = {Springer Series in Advanced Microelectronics}, VOLUME = {26}, }
Endnote
%0 Book Section %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Brosch, Daniel %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T HDR Tone Mapping : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-D418-7 %R 10.1007/978-3-540-44433-6_11 %D 2007 %B High-Dynamic-Range (HDR) Vision %E Hoefflinger, Bernd %P 193 - 210 %I Springer %C Berlin %@ 978-3-540-44432-9 978-3-540-44433-6 %S Springer Series in Advanced Microelectronics %N 26 %@ false
Lensch, H.P.A., Goesele, M., and Müller, G. 2007. Capturing Reflectance - From Theory to Practice. Eurographics 2007 Tutorial Notes, Eurographics Association.
Export
BibTeX
@inproceedings{Lensch:2007:CRT, TITLE = {Capturing Reflectance -- From Theory to Practice}, AUTHOR = {Lensch, Hendrik P. A. and Goesele, Michael and M{\"u}ller, Gero}, LANGUAGE = {eng}, ISSN = {1017-4656}, LOCALID = {Local-ID: C12573CC004A8E26-8516249336F3C8C9C12573C9003DA89D-Lensch:2007:CRT}, PUBLISHER = {Eurographics Association}, YEAR = {2007}, DATE = {2007}, BOOKTITLE = {Eurographics 2007 Tutorial Notes}, EDITOR = {Myszkowski, Karol and Havran, Vlastimil}, PAGES = {485--556}, ADDRESS = {Prague, Czech Republic}, }
Endnote
%0 Conference Proceedings %A Lensch, Hendrik P. A. %A Goesele, Michael %A M&#252;ller, Gero %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Capturing Reflectance - From Theory to Practice : Tutorial Notes EUROGRAPHICS 2007 Tutorial 6 %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1E7E-7 %F EDOC: 356507 %F OTHER: Local-ID: C12573CC004A8E26-8516249336F3C8C9C12573C9003DA89D-Lensch:2007:CRT %D 2007 %B Eurographics 2007 %Z date of event: 2007-09-03 - 2007-09-07 %C Prague, Czech Republic %B Eurographics 2007 Tutorial Notes %E Myszkowski, Karol; Havran, Vlastimil %P 485 - 556 %I Eurographics Association %@ false
Mantiuk, R., Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2007. High Dynamic Range Image and Video Compression - Fidelity Matching Human Visual Performance. IEEE International Conference on Image Processing 2007, ICIP 2007. - Vol. 1, IEEE.
Abstract
Vast majority of digital images and video material stored today can capture only a fraction of visual information visible to the human eye and does not offer sufficient quality to fully exploit capabilities of new display devices. High dynamic range (HDR) image and video formats encode the full visible range of luminance and color gamut, thus offering ultimate fidelity, limited only by the capabilities of the human eye and not by any existing technology. In this paper we demonstrate how existing image and video compression standards can be extended to encode HDR content efficiently. This is achieved by a custom color space for encoding HDR pixel values that is derived from the visual performance data. We also demonstrate how HDR image and video compression can be designed so that it is backward compatible with existing formats.
Export
BibTeX
@inproceedings{Mantiuk2007hdrivc, TITLE = {High Dynamic Range Image and Video Compression -- Fidelity Matching Human Visual Performance}, AUTHOR = {Mantiuk, Rafa{\l} and Krawczyk, Grzegorz and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-4244-1437-6}, DOI = {10.1109/ICIP.2007.4378878}, LOCALID = {Local-ID: C12573CC004A8E26-8908FB59F4C64796C125739F003CC9EF-Mantiuk2007hdrivc}, PUBLISHER = {IEEE}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {Vast majority of digital images and video material stored today can capture only a fraction of visual information visible to the human eye and does not offer sufficient quality to fully exploit capabilities of new display devices. High dynamic range (HDR) image and video formats encode the full visible range of luminance and color gamut, thus offering ultimate fidelity, limited only by the capabilities of the human eye and not by any existing technology. In this paper we demonstrate how existing image and video compression standards can be extended to encode HDR content efficiently. This is achieved by a custom color space for encoding HDR pixel values that is derived from the visual performance data. We also demonstrate how HDR image and video compression can be designed so that it is backward compatible with existing formats.}, BOOKTITLE = {IEEE International Conference on Image Processing 2007, ICIP 2007. -- Vol. 1}, PAGES = {9--12}, }
Endnote
%0 Conference Proceedings %A Mantiuk, Rafa&#322; %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T High Dynamic Range Image and Video Compression - Fidelity Matching Human Visual Performance : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1F68-F %F EDOC: 356576 %R 10.1109/ICIP.2007.4378878 %F OTHER: Local-ID: C12573CC004A8E26-8908FB59F4C64796C125739F003CC9EF-Mantiuk2007hdrivc %I IEEE %D 2007 %B Untitled Event %Z date of event: 2007-09-16 - 2007-09-19 %C San Antonio, TX, USA %X Vast majority of digital images and video material stored today can capture only a fraction of visual information visible to the human eye and does not offer sufficient quality to fully exploit capabilities of new display devices. High dynamic range (HDR) image and video formats encode the full visible range of luminance and color gamut, thus offering ultimate fidelity, limited only by the capabilities of the human eye and not by any existing technology. In this paper we demonstrate how existing image and video compression standards can be extended to encode HDR content efficiently. This is achieved by a custom color space for encoding HDR pixel values that is derived from the visual performance data. We also demonstrate how HDR image and video compression can be designed so that it is backward compatible with existing formats. %B IEEE International Conference on Image Processing 2007, ICIP 2007. - Vol. 1 %P 9 - 12 %I IEEE %@ 978-1-4244-1437-6
Yoshida, A., Blanz, V., Myszkowski, K., and Seidel, H.-P. 2007a. Testing tone mapping operators with human-perceived reality. Journal of Electronic Imaging 16, 1.
Abstract
A number of successful tone mapping operators for contrast compression have been proposed due to the need to visualize high dynamic range (HDR) images on low dynamic range (LDR) devices. They were inspired by fields as diverse as image processing, photographic practice, and modeling of the human visual systems (HVS). The variety of approaches calls for a systematic perceptual evaluation of their performance. We conduct a psychophysical experiment based on a direct comparison between the appearance of real-world scenes and HDR images of these scenes displayed on an LDR monitor. In our experiment, HDR images are tone mapped by seven existing tone mapping operators. The primary interest of this psychophysical experiment is to assess the differences in how tone mapped images are perceived by human observers and to find out which attributes of image appearance account for these differences when tone mapped images are compared directly with their corresponding real-world scenes rather than with each other. The human subjects rate image naturalness, overall contrast, overall brightness, and detail reproduction in dark and bright image regions with respect to the corresponding real-world scene. The results indicate substantial differences in perception of images produced by individual tone mapping operators. We observe a clear distinction between global and local operators in favor of the latter, and we classify the tone mapping operators according to naturalness and appearance attributes.
Export
BibTeX
@article{Yoshida_JEI2007, TITLE = {Testing tone mapping operators with human-perceived reality}, AUTHOR = {Yoshida, Akiko and Blanz, Volker and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {1017-9909}, DOI = {10.1117/1.2711822}, LOCALID = {Local-ID: C12573CC004A8E26-1BC207A1242FDBC1C1257222003A5012-Yoshida_JEI2007}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {A number of successful tone mapping operators for contrast compression have been proposed due to the need to visualize high dynamic range (HDR) images on low dynamic range (LDR) devices. They were inspired by fields as diverse as image processing, photographic practice, and modeling of the human visual systems (HVS). The variety of approaches calls for a systematic perceptual evaluation of their performance. We conduct a psychophysical experiment based on a direct comparison between the appearance of real-world scenes and HDR images of these scenes displayed on an LDR monitor. In our experiment, HDR images are tone mapped by seven existing tone mapping operators. The primary interest of this psychophysical experiment is to assess the differences in how tone mapped images are perceived by human observers and to find out which attributes of image appearance account for these differences when tone mapped images are compared directly with their corresponding real-world scenes rather than with each other. The human subjects rate image naturalness, overall contrast, overall brightness, and detail reproduction in dark and bright image regions with respect to the corresponding real-world scene. The results indicate substantial differences in perception of images produced by individual tone mapping operators. We observe a clear distinction between global and local operators in favor of the latter, and we classify the tone mapping operators according to naturalness and appearance attributes.}, JOURNAL = {Journal of Electronic Imaging}, VOLUME = {16}, NUMBER = {1}, PAGES = {1--14}, EID = {013004}, }
Endnote
%0 Journal Article %A Yoshida, Akiko %A Blanz, Volker %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Testing tone mapping operators with human-perceived reality : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-20EF-9 %F EDOC: 356603 %R 10.1117/1.2711822 %F OTHER: Local-ID: C12573CC004A8E26-1BC207A1242FDBC1C1257222003A5012-Yoshida_JEI2007 %D 2007 %* Review method: peer-reviewed %X A number of successful tone mapping operators for contrast compression have been proposed due to the need to visualize high dynamic range (HDR) images on low dynamic range (LDR) devices. They were inspired by fields as diverse as image processing, photographic practice, and modeling of the human visual systems (HVS). The variety of approaches calls for a systematic perceptual evaluation of their performance. We conduct a psychophysical experiment based on a direct comparison between the appearance of real-world scenes and HDR images of these scenes displayed on an LDR monitor. In our experiment, HDR images are tone mapped by seven existing tone mapping operators. The primary interest of this psychophysical experiment is to assess the differences in how tone mapped images are perceived by human observers and to find out which attributes of image appearance account for these differences when tone mapped images are compared directly with their corresponding real-world scenes rather than with each other. The human subjects rate image naturalness, overall contrast, overall brightness, and detail reproduction in dark and bright image regions with respect to the corresponding real-world scene. The results indicate substantial differences in perception of images produced by individual tone mapping operators. 
We observe a clear distinction between global and local operators in favor of the latter, and we classify the tone mapping operators according to naturalness and appearance attributes. %J Journal of Electronic Imaging %V 16 %N 1 %& 1 %P 1 - 14 %Z sequence number: 013004 %@ false
Yoshida, A., Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2007b. Perceptual Uniformity of Contrast Scaling in Complex Images. APGV 2007, Symposium on Applied Perception in Graphics and Visualization, ACM.
Export
BibTeX
@inproceedings{Yoshida_APGV2007, TITLE = {Perceptual Uniformity of Contrast Scaling in Complex Images}, AUTHOR = {Yoshida, Akiko and Krawczyk, Grzegorz and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {978-1-59593-670-7}, DOI = {10.1145/1272582.1272614}, PUBLISHER = {ACM}, YEAR = {2007}, DATE = {2007}, BOOKTITLE = {APGV 2007, Symposium on Applied Perception in Graphics and Visualization}, EDITOR = {Wallraven, Christian and Sundstedt, Veronica and Fleming, Roland W. and Langer, Michael and Spencer, Stephen N.}, PAGES = {137--137}, ADDRESS = {T{\"u}bingen, Germany}, }
Endnote
%0 Conference Proceedings %A Yoshida, Akiko %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptual Uniformity of Contrast Scaling in Complex Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-D1B0-5 %R 10.1145/1272582.1272614 %D 2007 %B Symposium on Applied Perception in Graphics and Visualization %Z date of event: 2007-07-25 - 2007-07-27 %C Tübingen, Germany %B APGV 2007 %E Wallraven, Christian; Sundstedt, Veronica; Fleming, Roland W.; Langer, Michael; Spencer, Stephen N. %P 137 - 137 %I ACM %@ 978-1-59593-670-7
2006
Efremov, A., Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2006. Design and evaluation of backward compatible high dynamic range video compression. Max-Planck-Institut für Informatik, Saarbrücken.
Abstract
In this report we describe the details of the backward compatible high dynamic range (HDR) video compression algorithm. The algorithm is designed to facilitate a smooth transition from standard low dynamic range (LDR) video to high fidelity high dynamic range content. The HDR and the corresponding LDR video frames are decorrelated and then compressed into a single MPEG stream, which can be played on both existing DVD players and HDR-enabled devices.
Export
BibTeX
@techreport{EfremovMantiukMyszkowskiSeidel, TITLE = {Design and evaluation of backward compatible high dynamic range video compression}, AUTHOR = {Efremov, Alexander and Mantiuk, Rafal and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-001}, NUMBER = {MPI-I-2006-4-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {In this report we describe the details of the backward compatible high dynamic range (HDR) video compression algorithm. The algorithm is designed to facilitate a smooth transition from standard low dynamic range (LDR) video to high fidelity high dynamic range content. The HDR and the corresponding LDR video frames are decorrelated and then compressed into a single MPEG stream, which can be played on both existing DVD players and HDR-enabled devices.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Efremov, Alexander %A Mantiuk, Rafal %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Design and evaluation of backward compatible high dynamic range video compression : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6811-0 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 50 p. %X In this report we describe the details of the backward compatible high dynamic range (HDR) video compression algorithm. The algorithm is designed to facilitate a smooth transition from standard low dynamic range (LDR) video to high fidelity high dynamic range content. The HDR and the corresponding LDR video frames are decorrelated and then compressed into a single MPEG stream, which can be played on both existing DVD players and HDR-enabled devices. %B Research Report / Max-Planck-Institut für Informatik
Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2006. Computational Model of Lightness Perception in High Dynamic Range Imaging. Human Vision and Electronic Imaging X, IS&T/SPIE’s 18th Annual Symposium on Electronic Imaging (2006), SPIE.
Abstract
An anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of human visual system such as lightness constancy and its spectacular failures which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by the regions of common illumination. The key aspect of the image perception is the estimation of lightness within each framework through the anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of the real world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.
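The core anchoring computation the abstract describes (lightness expressed relative to the luminance perceived as white) can be illustrated for the degenerate case of a single framework; the percentile-based anchor and the two-log-unit floor below are illustrative assumptions, not the paper's framework decomposition:

```python
import numpy as np

def anchored_lightness(luminance, white_percentile=95.0):
    """Toy single-framework anchoring (illustrative, not the paper's
    algorithm): estimate the luminance perceived as white and express
    lightness relative to that anchor. The paper first decomposes the
    image into frameworks (regions of common illumination) and anchors
    within each before computing a global lightness."""
    # Assumed anchor: a high percentile stands in for "perceived white".
    anchor = np.percentile(luminance, white_percentile)
    # Lightness in log10 units relative to the anchor, limited to a
    # range of two orders of magnitude below white (an assumption).
    return np.clip(np.log10(luminance / anchor), -2.0, 0.0)

# Synthetic HDR luminance map (log-normal, spanning several decades).
hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(64, 64))
lightness = anchored_lightness(hdr)
```

Values at or above the anchor clip to 0 (white); a real framework decomposition would instead assign such pixels to a differently illuminated group.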
Export
BibTeX
@inproceedings{Krawczyk2006spie, TITLE = {Computational Model of Lightness Perception in High Dynamic Range Imaging}, AUTHOR = {Krawczyk, Grzegorz and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N. and Daly, Scott J.}, LANGUAGE = {eng}, ISSN = {0277-786X}, LOCALID = {Local-ID: C125675300671F7B-E9AB6DE505E34EABC1257149002AB5F8-Krawczyk2006spie}, PUBLISHER = {SPIE}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {An anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of human visual system such as lightness constancy and its spectacular failures which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by the regions of common illumination. The key aspect of the image perception is the estimation of lightness within each framework through the anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of the real world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.}, BOOKTITLE = {Human Vision and Electronic Imaging X, IS\&T/SPIE's 18th Annual Symposium on Electronic Imaging (2006)}, PAGES = {1--12}, SERIES = {SPIE}, }
Endnote
%0 Conference Proceedings %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Seidel, Hans-Peter %E Rogowitz, Bernice E. %E Pappas, Thrasyvoulos N. %E Daly, Scott J. %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Computational Model of Lightness Perception in High Dynamic Range Imaging : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2258-5 %F EDOC: 314537 %F OTHER: Local-ID: C125675300671F7B-E9AB6DE505E34EABC1257149002AB5F8-Krawczyk2006spie %I SPIE %D 2006 %B Untitled Event %Z date of event: 2006-01-15 - %C San Jose, CA, USA %X An anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of human visual system such as lightness constancy and its spectacular failures which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by the regions of common illumination. The key aspect of the image perception is the estimation of lightness within each framework through the anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of the real world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception. %B Human Vision and Electronic Imaging X, IS&T/SPIE's 18th Annual Symposium on Electronic Imaging (2006) %P 1 - 12 %I SPIE %B SPIE %@ false
Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2006a. Lossy compression of high dynamic range images and video. Human Vision and Electronic Imaging XI, SPIE.
Abstract
Most common image and video formats have been designed to work with existing output devices, like LCD or CRT monitors. As display technology makes progress, these formats no longer represent the data that new devices can display. Therefore a shift towards higher precision image and video formats is imminent. To overcome limitations of common image and video formats, such as JPEG, PNG or MPEG, we propose a novel color space, which can accommodate an extended dynamic range and guarantees the precision that is below the visibility threshold. The proposed color space, which is derived from contrast detection data, can represent the full range of luminance values and the complete color gamut that is visible to the human eye. We show that only minor changes are required to the existing encoding algorithms to accommodate the new color space and therefore greatly enhance information content of the visual data. We demonstrate this with two compression algorithms for High Dynamic Range (HDR) visual data: for static images and for video. We argue that the proposed HDR representation is a simple and universal way to encode visual data independent of the display or capture technology.
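The idea of a luminance encoding whose quantization steps stay below the visibility threshold can be sketched as follows. This toy version spaces the integer codes uniformly in log luminance, whereas the paper derives the spacing from contrast detection data; the range limits and bit depth here are assumptions:

```python
import numpy as np

L_MIN, L_MAX, N_BITS = 1e-4, 1e8, 12  # assumed luminance range (cd/m^2) and bit depth

def luma_encode(L):
    """Map absolute luminance to integer codes spaced uniformly in
    log10 luminance, so each code step is a constant contrast ratio.
    (The paper's color space instead varies the step size according
    to measured contrast detection thresholds.)"""
    t = (np.log10(np.clip(L, L_MIN, L_MAX)) - np.log10(L_MIN)) \
        / (np.log10(L_MAX) - np.log10(L_MIN))
    return np.round(t * (2 ** N_BITS - 1)).astype(np.uint16)

def luma_decode(code):
    """Inverse mapping: integer code back to absolute luminance."""
    t = code / (2 ** N_BITS - 1)
    return 10.0 ** (np.log10(L_MIN) + t * (np.log10(L_MAX) - np.log10(L_MIN)))
```

With 12 bits spanning 12 orders of magnitude, each code step changes luminance by roughly 0.7%, on the order of the contrast detection threshold in good viewing conditions, which is what makes the quantization visually lossless.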
Export
BibTeX
@inproceedings{Mantiuk2005:LossyCompression, TITLE = {Lossy compression of high dynamic range images and video}, AUTHOR = {Mantiuk, Rafa{\l} and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N. and Daly, Scott J.}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-313F8F727ABF44C0C125713800369E82-Mantiuk2005:LossyCompression}, PUBLISHER = {SPIE}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Most common image and video formats have been designed to work with existing output devices, like LCD or CRT monitors. As display technology makes progress, these formats no longer represent the data that new devices can display. Therefore a shift towards higher precision image and video formats is imminent. To overcome limitations of common image and video formats, such as JPEG, PNG or MPEG, we propose a novel color space, which can accommodate an extended dynamic range and guarantees the precision that is below the visibility threshold. The proposed color space, which is derived from contrast detection data, can represent the full range of luminance values and the complete color gamut that is visible to the human eye. We show that only minor changes are required to the existing encoding algorithms to accommodate the new color space and therefore greatly enhance information content of the visual data. We demonstrate this with two compression algorithms for High Dynamic Range (HDR) visual data: for static images and for video. We argue that the proposed HDR representation is a simple and universal way to encode visual data independent of the display or capture technology.}, BOOKTITLE = {Human Vision and Electronic Imaging XI}, SERIES = {SPIE}, }
Endnote
%0 Conference Proceedings %A Mantiuk, Rafał %A Myszkowski, Karol %A Seidel, Hans-Peter %E Rogowitz, Bernice E. %E Pappas, Thrasyvoulos N. %E Daly, Scott J. %+ Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Lossy compression of high dynamic range images and video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-235C-8 %F EDOC: 314546 %F OTHER: Local-ID: C125675300671F7B-313F8F727ABF44C0C125713800369E82-Mantiuk2005:LossyCompression %I SPIE %D 2006 %B Untitled Event %Z date of event: 2006-01-15 - %C San Jose, USA %X Most common image and video formats have been designed to work with existing output devices, like LCD or CRT monitors. As display technology makes progress, these formats no longer represent the data that new devices can display. Therefore a shift towards higher precision image and video formats is imminent. To overcome limitations of common image and video formats, such as JPEG, PNG or MPEG, we propose a novel color space, which can accommodate an extended dynamic range and guarantees the precision that is below the visibility threshold. The proposed color space, which is derived from contrast detection data, can represent the full range of luminance values and the complete color gamut that is visible to the human eye. We show that only minor changes are required to the existing encoding algorithms to accommodate the new color space and therefore greatly enhance information content of the visual data. We demonstrate this with two compression algorithms for High Dynamic Range (HDR) visual data: for static images and for video. We argue that the proposed HDR representation is a simple and universal way to encode visual data independent of the display or capture technology. %B Human Vision and Electronic Imaging XI %I SPIE %B SPIE
Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2006b. A Perceptual Framework for Contrast Processing of High Dynamic Range Images. ACM Transactions on Applied Perception 3.
Abstract
Image processing often involves an image transformation into a domain that is better correlated with visual perception, such as the wavelet domain, image pyramids, multi-scale contrast representations, contrast in retinex algorithms, and chroma, lightness and colorfulness predictors in color appearance models. Many of these transformations are not ideally suited for image processing that significantly modifies an image. For example, the modification of a single band in a multi-scale model leads to an unrealistic image with severe halo artifacts. Inspired by gradient domain methods we derive a framework that imposes constraints on the entire set of contrasts in an image for a full range of spatial frequencies. This way, even severe image modifications do not reverse the polarity of contrast. The strengths of the framework are demonstrated by aggressive contrast enhancement and a visually appealing tone mapping which does not introduce artifacts. Additionally, we perceptually linearize contrast magnitudes using a custom transducer function. The transducer function has been derived especially for the purpose of HDR images, based on the contrast discrimination measurements for high contrast stimuli.
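The gradient-domain idea in the abstract, operating on contrasts (differences of log luminance) and reintegrating, can be shown in a minimal 1-D form where reintegration is an exact cumulative sum; the paper's framework is 2-D, multi-scale, and solved as a constrained optimization, so this is only an illustration:

```python
import numpy as np

def scale_contrast_1d(log_lum, k=1.5):
    """Scale 1-D contrasts (differences of neighboring log-luminances)
    by a factor k and reintegrate by cumulative summation. For k > 0
    the sign of every contrast is preserved, i.e. contrast polarity is
    never reversed -- the property the paper's constraints enforce in 2-D."""
    contrasts = np.diff(log_lum)              # G_i = x_{i+1} - x_i
    modified = k * contrasts                  # uniform contrast boost (k > 1)
    return np.concatenate(([log_lum[0]], log_lum[0] + np.cumsum(modified)))

row = np.log10(np.array([1.0, 2.0, 4.0, 2.0]))  # toy log-luminance scanline
boosted = scale_contrast_1d(row, k=2.0)
```

In 2-D the modified contrasts of a full range of spatial frequencies are generally inconsistent with each other, which is why the paper reintegrates by least-squares optimization rather than by summation.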
Export
BibTeX
@article{Mantiuk2006:ContrastDomain, TITLE = {A Perceptual Framework for Contrast Processing of High Dynamic Range Images}, AUTHOR = {Mantiuk, Rafa{\l} and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-43FC98F7A2FC192EC1257149002E3B9A-Mantiuk2006:ContrastDomain}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Image processing often involves an image transformation into a domain that is better correlated with visual perception, such as the wavelet domain, image pyramids, multi-scale contrast representations, contrast in retinex algorithms, and chroma, lightness and colorfulness predictors in color appearance models. Many of these transformations are not ideally suited for image processing that significantly modifies an image. For example, the modification of a single band in a multi-scale model leads to an unrealistic image with severe halo artifacts. Inspired by gradient domain methods we derive a framework that imposes constraints on the entire set of contrasts in an image for a full range of spatial frequencies. This way, even severe image modifications do not reverse the polarity of contrast. The strengths of the framework are demonstrated by aggressive contrast enhancement and a visually appealing tone mapping which does not introduce artifacts. Additionally, we perceptually linearize contrast magnitudes using a custom transducer function. The transducer function has been derived especially for the purpose of HDR images, based on the contrast discrimination measurements for high contrast stimuli.}, JOURNAL = {ACM Transactions on Applied Perception}, VOLUME = {3}, PAGES = {286--308}, }
Endnote
%0 Journal Article %A Mantiuk, Rafał %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A Perceptual Framework for Contrast Processing of High Dynamic Range Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2214-E %F EDOC: 314382 %F OTHER: Local-ID: C125675300671F7B-43FC98F7A2FC192EC1257149002E3B9A-Mantiuk2006:ContrastDomain %D 2006 %* Review method: peer-reviewed %X Image processing often involves an image transformation into a domain that is better correlated with visual perception, such as the wavelet domain, image pyramids, multi-scale contrast representations, contrast in retinex algorithms, and chroma, lightness and colorfulness predictors in color appearance models. Many of these transformations are not ideally suited for image processing that significantly modifies an image. For example, the modification of a single band in a multi-scale model leads to an unrealistic image with severe halo artifacts. Inspired by gradient domain methods we derive a framework that imposes constraints on the entire set of contrasts in an image for a full range of spatial frequencies. This way, even severe image modifications do not reverse the polarity of contrast. The strengths of the framework are demonstrated by aggressive contrast enhancement and a visually appealing tone mapping which does not introduce artifacts. Additionally, we perceptually linearize contrast magnitudes using a custom transducer function. The transducer function has been derived especially for the purpose of HDR images, based on the contrast discrimination measurements for high contrast stimuli. %J ACM Transactions on Applied Perception %V 3 %& 286 %P 286 - 308
Mantiuk, R., Efremov, A., Myszkowski, K., and Seidel, H.-P. 2006c. Backward Compatible High Dynamic Range MPEG Video Compression. Proceedings of ACM SIGGRAPH 2006, ACM.
Abstract
To embrace the imminent transition from traditional low-contrast video (LDR) content to superior high dynamic range (HDR) content, we propose a novel backward compatible HDR video compression (HDR MPEG) method. We introduce a compact reconstruction function that is used to decompose an HDR video stream into a residual stream and a standard LDR stream, which can be played on existing MPEG decoders, such as DVD players. The reconstruction function is finely tuned to the content of each HDR frame to achieve strong decorrelation between the LDR and residual streams, which minimizes the amount of redundant information. The size of the residual stream is further reduced by removing invisible details prior to compression using our HDR-enabled filter, which models luminance adaptation, contrast sensitivity, and visual masking based on the HDR content. Designed especially for DVD movie distribution, our HDR MPEG compression method features low storage requirements for HDR content resulting in a 30% size increase to an LDR video sequence. The proposed compression method does not impose restrictions or modify the appearance of the LDR or HDR video. This is important for backward compatibility of the LDR stream with current DVD appearance, and also enables independent fine tuning, tone mapping, and color grading of both streams.
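The decomposition into an LDR stream plus a decorrelated residual can be sketched with a binned-mean reconstruction function; the paper tunes its reconstruction function per frame in a more principled way, so the helper names and the binning below are assumptions:

```python
import numpy as np

def decompose(hdr_log, ldr, n_codes=256):
    """Build a per-frame reconstruction function (mean log-HDR
    luminance observed for each 8-bit LDR code) and the residual
    that it fails to predict. Transmitting the LDR frame, the 256
    reconstruction values, and the residual allows exact recovery."""
    recon = np.zeros(n_codes)
    for v in range(n_codes):
        mask = (ldr == v)
        if mask.any():
            recon[v] = hdr_log[mask].mean()   # reconstruction function bin
    residual = hdr_log - recon[ldr]           # decorrelated residual stream
    return recon, residual

def reconstruct(recon, ldr, residual):
    """Decoder side: predicted log-HDR plus residual."""
    return recon[ldr] + residual

rng = np.random.default_rng(0)
hdr_log = np.log10(rng.lognormal(0.0, 1.0, (32, 32)))   # synthetic log-HDR frame
# Stand-in tone mapping: linear rescale of log luminance to 8 bits.
ldr = np.round((hdr_log - hdr_log.min()) / np.ptp(hdr_log) * 255).astype(np.uint8)
recon, residual = decompose(hdr_log, ldr)
```

Kept losslessly, as here, the round trip is exact; the compression gains in the paper come from quantizing the residual and perceptually filtering out its invisible details.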
Export
BibTeX
@inproceedings{Mantiuk2006:hdrmpeg, TITLE = {Backward Compatible High Dynamic Range {MPEG} Video Compression}, AUTHOR = {Mantiuk, Rafa{\l} and Efremov, Alexander and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Dorsey, Julie}, LANGUAGE = {eng}, ISSN = {0730-0301}, LOCALID = {Local-ID: C125675300671F7B-1B2B94EF48903F44C1257149002EEC16-Mantiuk2006:hdrmpeg}, PUBLISHER = {ACM}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {To embrace the imminent transition from traditional low-contrast video (LDR) content to superior high dynamic range (HDR) content, we propose a novel backward compatible HDR video compression (HDR~MPEG) method. We introduce a compact reconstruction function that is used to decompose an HDR video stream into a residual stream and a standard LDR stream, which can be played on existing MPEG decoders, such as DVD players. The reconstruction function is finely tuned to the content of each HDR frame to achieve strong decorrelation between the LDR and residual streams, which minimizes the amount of redundant information. The size of the residual stream is further reduced by removing invisible details prior to compression using our HDR-enabled filter, which models luminance adaptation, contrast sensitivity, and visual masking based on the HDR content. Designed especially for DVD movie distribution, our HDR~MPEG compression method features low storage requirements for HDR content resulting in a 30\% size increase to an LDR video sequence. The proposed compression method does not impose restrictions or modify the appearance of the LDR or HDR video. This is important for backward compatibility of the LDR stream with current DVD appearance, and also enables independent fine tuning, tone mapping, and color grading of both streams.}, BOOKTITLE = {Proceedings of ACM SIGGRAPH 2006}, PAGES = {713--723}, SERIES = {ACM Transactions on Graphics}, }
Endnote
%0 Conference Proceedings %A Mantiuk, Rafał %A Efremov, Alexander %A Myszkowski, Karol %A Seidel, Hans-Peter %E Dorsey, Julie %+ Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Backward Compatible High Dynamic Range MPEG Video Compression : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-223A-9 %F EDOC: 314605 %F OTHER: Local-ID: C125675300671F7B-1B2B94EF48903F44C1257149002EEC16-Mantiuk2006:hdrmpeg %I ACM %D 2006 %B Untitled Event %Z date of event: 2006-07-31 - %C Boston, MA, USA %X To embrace the imminent transition from traditional low-contrast video (LDR) content to superior high dynamic range (HDR) content, we propose a novel backward compatible HDR video compression (HDR MPEG) method. We introduce a compact reconstruction function that is used to decompose an HDR video stream into a residual stream and a standard LDR stream, which can be played on existing MPEG decoders, such as DVD players. The reconstruction function is finely tuned to the content of each HDR frame to achieve strong decorrelation between the LDR and residual streams, which minimizes the amount of redundant information. The size of the residual stream is further reduced by removing invisible details prior to compression using our HDR-enabled filter, which models luminance adaptation, contrast sensitivity, and visual masking based on the HDR content. Designed especially for DVD movie distribution, our HDR MPEG compression method features low storage requirements for HDR content resulting in a 30% size increase to an LDR video sequence. The proposed compression method does not impose restrictions or modify the appearance of the LDR or HDR video. This is important for backward compatibility of the LDR stream with current DVD appearance, and also enables independent fine tuning, tone mapping, and color grading of both streams. %B Proceedings of ACM SIGGRAPH 2006 %P 713 - 723 %I ACM %B ACM Transactions on Graphics %@ false
Smith, K., Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2006. Beyond Tone Mapping: Enhanced Depiction of Tone Mapped HDR Images. The European Association for Computer Graphics 27th Annual Conference : EUROGRAPHICS 2006, Blackwell.
Abstract
High Dynamic Range (HDR) images capture the full range of luminance present in real world scenes, and unlike Low Dynamic Range (LDR) images, can simultaneously contain detailed information in the deepest of shadows and the brightest of light sources. For display or aesthetic purposes, it is often necessary to perform tone mapping, which creates LDR depictions of HDR images at the cost of contrast information loss. The purpose of this work is two-fold: to analyze a displayed LDR image against its original HDR counterpart in terms of perceived contrast distortion, and to enhance the LDR depiction with perceptually driven colour adjustments to restore the original HDR contrast information. For analysis, we present a novel algorithm for the characterization of tone mapping distortion in terms of observed loss of global contrast, and loss of contour and texture details. We classify existing tone mapping operators accordingly. We measure both distortions with perceptual metrics that enable the automatic and meaningful enhancement of LDR depictions. For image enhancement, we identify artistic and photographic colour techniques from which we derive adjustments that create contrast with colour. The enhanced LDR image is an improved depiction of the original HDR image with restored contrast information.
Export
BibTeX
@inproceedings{Smith2006eg, TITLE = {Beyond Tone Mapping: Enhanced Depiction of Tone Mapped {HDR} Images}, AUTHOR = {Smith, Kaleigh and Krawczyk, Grzegorz and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Szirmay-Kalos, L{\'a}szl{\'o} and Gr{\"o}ller, Eduard}, LANGUAGE = {eng}, ISSN = {0167-7055}, LOCALID = {Local-ID: C125675300671F7B-8B783A77FDD3AB10C125722F003AF5B2-Smith2006eg}, PUBLISHER = {Blackwell}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {High Dynamic Range (HDR) images capture the full range of luminance present in real world scenes, and unlike Low Dynamic Range (LDR) images, can simultaneously contain detailed information in the deepest of shadows and the brightest of light sources. For display or aesthetic purposes, it is often necessary to perform tone mapping, which creates LDR depictions of HDR images at the cost of contrast information loss. The purpose of this work is two-fold: to analyze a displayed LDR image against its original HDR counterpart in terms of perceived contrast distortion, and to enhance the LDR depiction with perceptually driven colour adjustments to restore the original HDR contrast information. For analysis, we present a novel algorithm for the characterization of tone mapping distortion in terms of observed loss of global contrast, and loss of contour and texture details. We classify existing tone mapping operators accordingly. We measure both distortions with perceptual metrics that enable the automatic and meaningful enhancement of LDR depictions. For image enhancement, we identify artistic and photographic colour techniques from which we derive adjustments that create contrast with colour. The enhanced LDR image is an improved depiction of the original HDR image with restored contrast information.}, BOOKTITLE = {The European Association for Computer Graphics 27th Annual Conference : EUROGRAPHICS 2006}, PAGES = {427--438}, SERIES = {Computer Graphics Forum}, }
Endnote
%0 Conference Proceedings %A Smith, Kaleigh %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Seidel, Hans-Peter %E Szirmay-Kalos, László %E Gröller, Eduard %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Beyond Tone Mapping: Enhanced Depiction of Tone Mapped HDR Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-223F-0 %F EDOC: 314503 %F OTHER: Local-ID: C125675300671F7B-8B783A77FDD3AB10C125722F003AF5B2-Smith2006eg %I Blackwell %D 2006 %B Untitled Event %Z date of event: 2006-09-06 - %C Vienna, Austria %X High Dynamic Range (HDR) images capture the full range of luminance present in real world scenes, and unlike Low Dynamic Range (LDR) images, can simultaneously contain detailed information in the deepest of shadows and the brightest of light sources. For display or aesthetic purposes, it is often necessary to perform tone mapping, which creates LDR depictions of HDR images at the cost of contrast information loss. The purpose of this work is two-fold: to analyze a displayed LDR image against its original HDR counterpart in terms of perceived contrast distortion, and to enhance the LDR depiction with perceptually driven colour adjustments to restore the original HDR contrast information. For analysis, we present a novel algorithm for the characterization of tone mapping distortion in terms of observed loss of global contrast, and loss of contour and texture details. We classify existing tone mapping operators accordingly. We measure both distortions with perceptual metrics that enable the automatic and meaningful enhancement of LDR depictions. For image enhancement, we identify artistic and photographic colour techniques from which we derive adjustments that create contrast with colour. The enhanced LDR image is an improved depiction of the original HDR image with restored contrast information. %B The European Association for Computer Graphics 27th Annual Conference : EUROGRAPHICS 2006 %P 427 - 438 %I Blackwell %@ 0167-7055 %B Computer Graphics Forum
Yoshida, A., Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2006a. Analysis of Reproducing Real-World Appearance on Displays of Varying Dynamic Range. EUROGRAPHICS 2006 (EG’06), Blackwell.
Abstract
We conduct a series of experiments to investigate the desired properties of a tone mapping operator (TMO) and to design such an operator based on subjective data. We propose a novel approach to the tone mapping problem, in which the tone mapping is determined by the data from subjective experiments, rather than an image processing algorithm or a visual model. To collect such data, a series of experiments are conducted in which the subjects adjust three generic TMO parameters: brightness, contrast and color saturation. In two experiments, the subjects are to find a) the most preferred image without a reference image and b) the closest image to the real-world scene which the subjects are confronted with. The purpose of these experiments is to collect data for two rendering goals of a TMO: rendering the most preferred image and preserving the fidelity with the real world scene. The data provide an assessment for the most intuitive control over the tone mapping parameters. Unlike most of the researched TMOs that focus on rendering for standard low dynamic range monitors, we consider a broad range of potential displays, each offering different dynamic range and brightness. We simulate capabilities of such displays on a high dynamic range (HDR) monitor. This lets us address the question of whether tone mapping is needed for HDR displays.
Export
BibTeX
@inproceedings{Yoshida_EG2006z, TITLE = {Analysis of Reproducing Real-World Appearance on Displays of Varying Dynamic Range}, AUTHOR = {Yoshida, Akiko and Mantiuk, Rafa{\l} and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, LOCALID = {Local-ID: C12573CC004A8E26-36B5343ECEA5A706C125730D00546611-Yoshida_EG2006z}, PUBLISHER = {Blackwell}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We conduct a series of experiments to investigate the desired properties of a tone mapping operator (TMO) and to design such an operator based on subjective data. We propose a novel approach to the tone mapping problem, in which the tone mapping is determined by the data from subjective experiments, rather than an image processing algorithm or a visual model. To collect such data, a series of experiments are conducted in which the subjects adjust three generic TMO parameters: brightness, contrast and color saturation. In two experiments, the subjects are to find a) the most preferred image without a reference image and b) the closest image to the real-world scene which the subjects are confronted with. The purpose of these experiments is to collect data for two rendering goals of a TMO: rendering the most preferred image and preserving the fidelity with the real world scene. The data provide an assessment for the most intuitive control over the tone mapping parameters. Unlike most of the researched TMOs that focus on rendering for standard low dynamic range monitors, we consider a broad range of potential displays, each offering different dynamic range and brightness. We simulate capabilities of such displays on a high dynamic range (HDR) monitor. This lets us address the question of whether tone mapping is needed for HDR displays.}, BOOKTITLE = {EUROGRAPHICS 2006 (EG'06)}, EDITOR = {Gr{\"o}ller, Eduard and Szirmay-Kalos, L{\'a}szl{\'o}}, PAGES = {415--426}, SERIES = {Computer Graphics Forum}, }
Endnote
%0 Conference Proceedings %A Yoshida, Akiko %A Mantiuk, Rafa&#322; %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Analysis of Reproducing Real-World Appearance on Displays of Varying Dynamic Range : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2481-A %F EDOC: 356548 %F OTHER: Local-ID: C12573CC004A8E26-36B5343ECEA5A706C125730D00546611-Yoshida_EG2006z %D 2006 %B Untitled Event %Z date of event: 2006-09-04 - 2006-09-08 %C Vienna, Austria %X We conduct a series of experiments to investigate the desired properties of a tone mapping operator (TMO) and to design such an operator based on subjective data. We propose a novel approach to the tone mapping problem, in which the tone mapping is determined by the data from subjective experiments, rather than an image processing algorithm or a visual model. To collect such data, a series of experiments are conducted in which the subjects adjust three generic TMO parameters: brightness, contrast and color saturation. In two experiments, the subjects are to find a) the most preferred image without a reference image and b) the closest image to the real-world scene which the subjects are confronted with. The purpose of these experiments is to collect data for two rendering goals of a TMO: rendering the most preferred image and preserving the fidelity with the real world scene. The data provide an assessment for the most intuitive control over the tone mapping parameters. Unlike most of the researched TMOs that focus on rendering for standard low dynamic range monitors, we consider a broad range of potential displays, each offering different dynamic range and brightness. We simulate capabilities of such displays on a high dynamic range (HDR) monitor. 
This lets us address the question of whether tone mapping is needed for HDR displays. %B EUROGRAPHICS 2006 (EG'06) %E Gr&#246;ller, Eduard; Szirmay-Kalos, L&#225;szl&#243; %P 415 - 426 %I Blackwell %B Computer Graphics Forum
Yoshida, A., Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2006b. Analysis of Reproducing Real-World Appearance on Displays of Varying Dynamic Range. Computer Graphics Forum 25.
Abstract
We conduct a series of experiments to investigate the desired properties of a tone mapping operator (TMO) and to design such an operator based on subjective data. We propose a novel approach to the tone mapping problem, in which the tone mapping parameters are determined based on the data from subjective experiments, rather than an image processing algorithm or a visual model. To collect this data, a series of experiments are conducted in which the subjects adjust three generic TMO parameters: brightness, contrast and color saturation. In two experiments, the subjects are to find a) the most preferred image without a reference image (preference task) and b) the closest image to the real-world scene which the subjects are confronted with (fidelity task). We analyze subjects' choice of parameters to provide more intuitive control over the parameters of a tone mapping operator. Unlike most of the researched TMOs that focus on rendering for standard low dynamic range monitors, we consider a broad range of potential displays, each offering different dynamic range and brightness. We simulate capabilities of such displays on a high dynamic range (HDR) display. This allows us to address the question of how tone mapping needs to be adjusted to accommodate displays with drastically different dynamic ranges.
Export
BibTeX
@article{Yoshida_EG2006, TITLE = {Analysis of Reproducing Real-World Appearance on Displays of Varying Dynamic Range}, AUTHOR = {Yoshida, Akiko and Mantiuk, Rafa{\l} and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, LOCALID = {Local-ID: C125675300671F7B-7F4559DA54638A8CC125722200392B07-Yoshida_EG2006}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We conduct a series of experiments to investigate the desired properties of a tone mapping operator (TMO) and to design such an operator based on subjective data. We propose a novel approach to the tone mapping problem, in which the tone mapping parameters are determined based on the data from subjective experiments, rather than an image processing algorithm or a visual model. To collect this data, a series of experiments are conducted in which the subjects adjust three generic TMO parameters: brightness, contrast and color saturation. In two experiments, the subjects are to find a) the most preferred image without a reference image (preference task) and b) the closest image to the real-world scene which the subjects are confronted with (fidelity task). We analyze subjects' choice of parameters to provide more intuitive control over the parameters of a tone mapping operator. Unlike most of the researched TMOs that focus on rendering for standard low dynamic range monitors, we consider a broad range of potential displays, each offering different dynamic range and brightness. We simulate capabilities of such displays on a high dynamic range (HDR) display. This allows us to address the question of how tone mapping needs to be adjusted to accommodate displays with drastically different dynamic ranges.}, JOURNAL = {Computer Graphics Forum}, VOLUME = {25}, PAGES = {415--426}, }
Endnote
%0 Journal Article %A Yoshida, Akiko %A Mantiuk, Rafa&#322; %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Analysis of Reproducing Real-World Appearance on Displays of Varying Dynamic Range : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-21F6-F %F EDOC: 314462 %F OTHER: Local-ID: C125675300671F7B-7F4559DA54638A8CC125722200392B07-Yoshida_EG2006 %D 2006 %* Review method: peer-reviewed %X We conduct a series of experiments to investigate the desired properties of a tone mapping operator (TMO) and to design such an operator based on subjective data. We propose a novel approach to the tone mapping problem, in which the tone mapping parameters are determined based on the data from subjective experiments, rather than an image processing algorithm or a visual model. To collect this data, a series of experiments are conducted in which the subjects adjust three generic TMO parameters: brightness, contrast and color saturation. In two experiments, the subjects are to find a) the most preferred image without a reference image (preference task) and b) the closest image to the real-world scene which the subjects are confronted with (fidelity task). We analyze subjects' choice of parameters to provide more intuitive control over the parameters of a tone mapping operator. Unlike most of the researched TMOs that focus on rendering for standard low dynamic range monitors, we consider a broad range of potential displays, each offering different dynamic range and brightness. We simulate capabilities of such displays on a high dynamic range (HDR) display. This allows us to address the question of how tone mapping needs to be adjusted to accommodate displays with drastically different dynamic ranges. %J Computer Graphics Forum %V 25 %& 415 %P 415 - 426 %@ false
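The three generic parameters the experiments above manipulate (brightness, contrast, color saturation) can be sketched as a minimal tone curve. This is an illustrative stand-in, not the operator derived in the paper; the geometric-mean anchor, the sigmoidal display mapping, and the Schlick-style saturation exponent are assumptions.

```python
import numpy as np

def generic_tmo(luminance, rgb, brightness=1.0, contrast=1.0, saturation=1.0):
    """Minimal three-parameter tone curve operating on log luminance.

    brightness  -- multiplicative gain on luminance
    contrast    -- exponent around the geometric-mean luminance
    saturation  -- per-pixel color-ratio exponent (Schlick-style)
    """
    L = np.maximum(luminance, 1e-6)
    log_mean = np.exp(np.mean(np.log(L)))          # geometric mean as anchor
    Ld = brightness * (L / log_mean) ** contrast   # tone-mapped luminance
    Ld = Ld / (1.0 + Ld)                           # simple displayable compression
    # re-apply color with adjustable saturation
    ratio = (rgb / L[..., None]) ** saturation
    return np.clip(ratio * Ld[..., None], 0.0, 1.0)
```

A subject-adjustment experiment like the one above would expose exactly these three knobs and record the preferred settings per scene and simulated display.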
2005
Havran, V., Smyk, M., Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2005. Interactive System for Dynamic Scene Lighting using Captured Video Environment Maps. Rendering Techniques 2005: Eurographics Symposium on Rendering, Eurographics Association.
Abstract
We present an interactive system for fully dynamic scene lighting using captured high dynamic range (HDR) video environment maps. The key component of our system is an algorithm for efficient decomposition of HDR video environment map captured over hemisphere into a set of representative directional light sources, which can be used for the direct lighting computation with shadows using graphics hardware. The resulting lights exhibit good temporal coherence and their number can be adaptively changed to keep a constant framerate while good spatial distribution (stratification) properties are maintained. We can handle a large number of light sources with shadows using a novel technique which reduces the cost of BRDF-based shading and visibility computations. We demonstrate the use of our system in a mixed reality application in which real and synthetic objects are illuminated by consistent lighting at interactive framerates.
Export
BibTeX
@inproceedings{Havran2005egsrEM, TITLE = {Interactive System for Dynamic Scene Lighting using Captured Video Environment Maps}, AUTHOR = {Havran, Vlastimil and Smyk, Miloslaw and Krawczyk, Grzegorz and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Deussen, Oliver and Keller, Alexander and Bala, Kavita and Dutr{\'e}, Philip and Fellner, Dieter W. and Spencer, Stephen N.}, LANGUAGE = {eng}, ISBN = {3-905673-23-1}, LOCALID = {Local-ID: C125675300671F7B-C3468DABE0F8D837C12570B30047ED74-Havran2005egsrEM}, PUBLISHER = {Eurographics Association}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {We present an interactive system for fully dynamic scene lighting using captured high dynamic range (HDR) video environment maps. The key component of our system is an algorithm for efficient decomposition of HDR video environment map captured over hemisphere into a set of representative directional light sources, which can be used for the direct lighting computation with shadows using graphics hardware. The resulting lights exhibit good temporal coherence and their number can be adaptively changed to keep a constant framerate while good spatial distribution (stratification) properties are maintained. We can handle a large number of light sources with shadows using a novel technique which reduces the cost of BRDF-based shading and visibility computations. We demonstrate the use of our system in a mixed reality application in which real and synthetic objects are illuminated by consistent lighting at interactive framerates.}, BOOKTITLE = {Rendering Techniques 2005: Eurographics Symposium on Rendering}, PAGES = {31--42,311}, }
Endnote
%0 Conference Proceedings %A Havran, Vlastimil %A Smyk, Miloslaw %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Seidel, Hans-Peter %E Deussen, Oliver %E Keller, Alexander %E Bala, Kavita %E Dutr&#233;, Philip %E Fellner, Dieter W. %E Spencer, Stephen N. %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Interactive System for Dynamic Scene Lighting using Captured Video Environment Maps : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-26DB-0 %F EDOC: 279016 %F OTHER: Local-ID: C125675300671F7B-C3468DABE0F8D837C12570B30047ED74-Havran2005egsrEM %I Eurographics Association %D 2005 %B Untitled Event %Z date of event: 2005-06-29 - %C Konstanz, Germany %X We present an interactive system for fully dynamic scene lighting using captured high dynamic range (HDR) video environment maps. The key component of our system is an algorithm for efficient decomposition of HDR video environment map captured over hemisphere into a set of representative directional light sources, which can be used for the direct lighting computation with shadows using graphics hardware. The resulting lights exhibit good temporal coherence and their number can be adaptively changed to keep a constant framerate while good spatial distribution (stratification) properties are maintained. We can handle a large number of light sources with shadows using a novel technique which reduces the cost of BRDF-based shading and visibility computations. We demonstrate the use of our system in a mixed reality application in which real and synthetic objects are illuminated by consistent lighting at interactive framerates. 
%B Rendering Techniques 2005: Eurographics Symposium on Rendering %P 31 - 42,311 %I Eurographics Association %@ 3-905673-23-1
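The core step described above, decomposing a captured HDR environment map into representative directional lights, can be approximated by luminance-proportional importance sampling of texels. This is a minimal sketch, not the paper's hierarchical, temporally coherent decomposition; the Monte Carlo weighting is a simplification (sampling without replacement is not strictly unbiased).

```python
import numpy as np

def env_map_to_lights(env, n_lights=16, rng=None):
    """Pick representative directional lights from an HDR environment map
    by importance-sampling texels proportionally to luminance.
    Returns (row, col) texel indices and per-light RGB powers."""
    rng = np.random.default_rng(0) if rng is None else rng
    lum = env.mean(axis=-1)
    p = lum.ravel() / lum.sum()
    idx = rng.choice(p.size, size=n_lights, replace=False, p=p)
    rows, cols = np.unravel_index(idx, lum.shape)
    # Monte Carlo weight: each light stands in for radiance / (N * probability)
    power = env.reshape(-1, 3)[idx] / (n_lights * p[idx, None])
    return np.stack([rows, cols], axis=1), power
```

Varying `n_lights` per frame mirrors the paper's idea of adapting the light count to hold a constant framerate, though the temporal-coherence machinery is omitted here.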
Jiménez, J.-R., Myszkowski, K., and Pueyo, X. 2005. Interactive Global Illumination in Dynamic Participating Media Using Selective Photon Tracing. SCCG ’05: Proceedings of the 21st spring conference on Computer graphics, ACM.
Export
BibTeX
@inproceedings{Jimenez05, TITLE = {Interactive Global Illumination in Dynamic Participating Media Using Selective Photon Tracing}, AUTHOR = {Jim{\'e}nez, Juan-Roberto and Myszkowski, Karol and Pueyo, Xavier}, LANGUAGE = {eng}, ISBN = {1-59593-203-6}, LOCALID = {Local-ID: C125675300671F7B-F70D9F523B0C4008C1256FE9004C51A5-Jimenez05}, PUBLISHER = {ACM}, YEAR = {2005}, DATE = {2005}, BOOKTITLE = {SCCG '05: Proceedings of the 21st spring conference on Computer graphics}, PAGES = {211--218}, }
Endnote
%0 Conference Proceedings %A Jim&#233;nez, Juan-Roberto %A Myszkowski, Karol %A Pueyo, Xavier %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Interactive Global Illumination in Dynamic Participating Media Using Selective Photon Tracing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-26D9-3 %F EDOC: 279010 %F OTHER: Local-ID: C125675300671F7B-F70D9F523B0C4008C1256FE9004C51A5-Jimenez05 %I ACM %D 2005 %B Untitled Event %Z date of event: 2005-05-12 - %C Budmerice, Slovakia %B SCCG '05: Proceedings of the 21st spring conference on Computer graphics %P 211 - 218 %I ACM %@ 1-59593-203-6
Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2005a. Lightness Perception in Tone Reproduction for High Dynamic Range Images. The European Association for Computer Graphics 26th Annual Conference : EUROGRAPHICS 2005, Blackwell.
Export
BibTeX
@inproceedings{Krawczyk05EG, TITLE = {Lightness Perception in Tone Reproduction for High Dynamic Range Images}, AUTHOR = {Krawczyk, Grzegorz and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Alexa, Marc and Marks, Joe}, LANGUAGE = {eng}, ISSN = {0167-7055}, LOCALID = {Local-ID: C125675300671F7B-D7B5D281DAAB9EB0C1256FE90049E357-Krawczyk05EG}, PUBLISHER = {Blackwell}, YEAR = {2005}, DATE = {2005}, BOOKTITLE = {The European Association for Computer Graphics 26th Annual Conference : EUROGRAPHICS 2005}, PAGES = {635--645}, SERIES = {Computer Graphics Forum}, }
Endnote
%0 Conference Proceedings %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Seidel, Hans-Peter %E Alexa, Marc %E Marks, Joe %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Lightness Perception in Tone Reproduction for High Dynamic Range Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-26F6-3 %F EDOC: 279009 %F OTHER: Local-ID: C125675300671F7B-D7B5D281DAAB9EB0C1256FE90049E357-Krawczyk05EG %I Blackwell %D 2005 %B Untitled Event %Z date of event: 2005-08-29 - %C Dublin, Ireland %B The European Association for Computer Graphics 26th Annual Conference : EUROGRAPHICS 2005 %P 635 - 645 %I Blackwell %B Computer Graphics Forum %@ false
Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2005b. Perceptual Effects in Real-Time Tone Mapping. SCCG ’05: Proceedings of the 21st spring conference on Computer graphics, ACM.
Abstract
Tremendous progress in the development and accessibility of high dynamic range (HDR) technology that has happened just recently results in fast proliferation of HDR synthetic image sequences and captured HDR video. When properly processed, such HDR data can lead to very convincing and realistic results even when presented on traditional low dynamic range (LDR) display devices. This requires real-time local contrast compression (tone mapping) with simultaneous modeling of important in HDR image perception effects such as visual acuity, glare, day and night vision. We propose a unified model to include all those effects into a common computational framework, which enables an efficient implementation on currently available graphics hardware. We develop a post processing module which can be added as the final stage of any real-time rendering system, game engine, or digital video player, which enhances the realism and believability of displayed image streams.
Export
BibTeX
@inproceedings{Krawczyk2005sccg, TITLE = {Perceptual Effects in Real-Time Tone Mapping}, AUTHOR = {Krawczyk, Grzegorz and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-A48310C4FDBE1EA6C1256FE9004D4776-Krawczyk2005sccg}, PUBLISHER = {ACM}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {Tremendous progress in the development and accessibility of high dynamic range (HDR) technology that has happened just recently results in fast proliferation of HDR synthetic image sequences and captured HDR video. When properly processed, such HDR data can lead to very convincing and realistic results even when presented on traditional low dynamic range (LDR) display devices. This requires real-time local contrast compression (tone mapping) with simultaneous modeling of important in HDR image perception effects such as visual acuity, glare, day and night vision. We propose a unified model to include all those effects into a common computational framework, which enables an efficient implementation on currently available graphics hardware. We develop a post processing module which can be added as the final stage of any real-time rendering system, game engine, or digital video player, which enhances the realism and believability of displayed image streams.}, BOOKTITLE = {SCCG '05: Proceedings of the 21st spring conference on Computer graphics}, PAGES = {195--202}, }
Endnote
%0 Conference Proceedings %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptual Effects in Real-Time Tone Mapping : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2757-0 %F EDOC: 279038 %F OTHER: Local-ID: C125675300671F7B-A48310C4FDBE1EA6C1256FE9004D4776-Krawczyk2005sccg %I ACM %D 2005 %B Untitled Event %Z date of event: 2005-05-12 - %C Budmerice, Slovakia %X Tremendous progress in the development and accessibility of high dynamic range (HDR) technology that has happened just recently results in fast proliferation of HDR synthetic image sequences and captured HDR video. When properly processed, such HDR data can lead to very convincing and realistic results even when presented on traditional low dynamic range (LDR) display devices. This requires real-time local contrast compression (tone mapping) with simultaneous modeling of important in HDR image perception effects such as visual acuity, glare, day and night vision. We propose a unified model to include all those effects into a common computational framework, which enables an efficient implementation on currently available graphics hardware. We develop a post processing module which can be added as the final stage of any real-time rendering system, game engine, or digital video player, which enhances the realism and believability of displayed image streams. %B SCCG '05: Proceedings of the 21st spring conference on Computer graphics %P 195 - 202 %I ACM
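One of the perceptual effects the abstract lists, loss of visual acuity in night (scotopic) vision, is commonly modeled as a luminance-dependent blur. The mapping below is a hedged sketch with placeholder constants; the paper's calibrated model is not reproduced here.

```python
import numpy as np

def acuity_blur_sigma(adaptation_luminance, base_sigma=0.5, max_sigma=4.0):
    """Illustrative mapping from adaptation luminance (cd/m^2) to a Gaussian
    blur width, mimicking acuity loss in scotopic vision. All constants are
    placeholders, interpolated in log luminance between 1 and 1e-3 cd/m^2."""
    La = max(adaptation_luminance, 1e-6)
    t = np.clip(np.log10(La) / -3.0, 0.0, 1.0)  # 0 at 1 cd/m^2, 1 at 1e-3
    return base_sigma + t * (max_sigma - base_sigma)
```

In a real-time pipeline such a sigma would drive a per-frame Gaussian blur pass on the GPU, alongside the glare and tone-mapping stages the abstract mentions.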
Mantiuk, R., Daly, S., Myszkowski, K., and Seidel, H.-P. 2005a. Predicting Visible Differences in High Dynamic Range Images - Model and its Calibration. Human Vision and Electronic Imaging X, IS&T/SPIE’s 17th Annual Symposium on Electronic Imaging (2005), SPIE.
Abstract
New imaging and rendering systems commonly use physically accurate lighting information in the form of high-dynamic range (HDR) images and video. HDR images contain actual colorimetric or physical values, which can span 14 orders of magnitude, instead of 8-bit renderings, found in standard images. The additional precision and quality retained in HDR visual data is necessary to display images on advanced HDR display devices, capable of showing contrast of 50,000:1, as compared to the contrast of 700:1 for LCD displays. With the development of high-dynamic range visual techniques comes a need for an automatic visual quality assessment of the resulting images. In this paper we propose several modifications to the Visual Difference Predictor (VDP). The modifications improve the prediction of perceivable differences in the full visible range of luminance and under the adaptation conditions corresponding to real scene observation. The proposed metric takes into account the aspects of high contrast vision, like scattering of the light in the optics (OTF), nonlinear response to light for the full range of luminance, and local adaptation. To calibrate our HDR VDP we perform experiments using an advanced HDR display, capable of displaying the range of luminance that is close to that found in real scenes.
Export
BibTeX
@inproceedings{Mantiuk2005, TITLE = {Predicting Visible Differences in High Dynamic Range Images -- Model and its Calibration}, AUTHOR = {Mantiuk, Rafal and Daly, Scott and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N. and Daly, Scott J.}, LANGUAGE = {eng}, ISSN = {0277-786X}, LOCALID = {Local-ID: C125675300671F7B-7A33923425AEBF68C1256F800037FB11-Mantiuk2005}, PUBLISHER = {SPIE}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {New imaging and rendering systems commonly use physically accurate lighting information in the form of high-dynamic range (HDR) images and video. HDR images contain actual colorimetric or physical values, which can span 14 orders of magnitude, instead of 8-bit renderings, found in standard images. The additional precision and quality retained in HDR visual data is necessary to display images on advanced HDR display devices, capable of showing contrast of 50,000:1, as compared to the contrast of 700:1 for LCD displays. With the development of high-dynamic range visual techniques comes a need for an automatic visual quality assessment of the resulting images. In this paper we propose several modifications to the Visual Difference Predicator (VDP). The modifications improve the prediction of perceivable differences in the full visible range of luminance and under the adaptation conditions corresponding to real scene observation. The proposed metric takes into account the aspects of high contrast vision, like scattering of the light in the optics (OTF), nonlinear response to light for the full range of luminance, and local adaptation. To calibrate our HDR~VDP we perform experiments using an advanced HDR display, capable of displaying the range of luminance that is close to that found in real scenes.}, BOOKTITLE = {Human Vision and Electronic Imaging X, IS\&T/SPIE's 17th Annual Symposium on Electronic Imaging (2005)}, PAGES = {204--214}, SERIES = {SPIE Proceedings Series}, }
Endnote
%0 Conference Proceedings %A Mantiuk, Rafal %A Daly, Scott %A Myszkowski, Karol %A Seidel, Hans-Peter %E Rogowitz, Bernice E. %E Pappas, Thrasyvoulos N. %E Daly, Scott J. %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Predicting Visible Differences in High Dynamic Range Images - Model and its Calibration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2773-0 %F EDOC: 278999 %F OTHER: Local-ID: C125675300671F7B-7A33923425AEBF68C1256F800037FB11-Mantiuk2005 %I SPIE %D 2005 %B Untitled Event %Z date of event: 2005-01-16 - %C San Jose, California USA %X New imaging and rendering systems commonly use physically accurate lighting information in the form of high-dynamic range (HDR) images and video. HDR images contain actual colorimetric or physical values, which can span 14 orders of magnitude, instead of 8-bit renderings, found in standard images. The additional precision and quality retained in HDR visual data is necessary to display images on advanced HDR display devices, capable of showing contrast of 50,000:1, as compared to the contrast of 700:1 for LCD displays. With the development of high-dynamic range visual techniques comes a need for an automatic visual quality assessment of the resulting images. In this paper we propose several modifications to the Visual Difference Predicator (VDP). The modifications improve the prediction of perceivable differences in the full visible range of luminance and under the adaptation conditions corresponding to real scene observation. The proposed metric takes into account the aspects of high contrast vision, like scattering of the light in the optics (OTF), nonlinear response to light for the full range of luminance, and local adaptation. 
To calibrate our HDR~VDP we perform experiments using an advanced HDR display, capable of displaying the range of luminance that is close to that found in real scenes. %B Human Vision and Electronic Imaging X, IS&T/SPIE's 17th Annual Symposium on Electronic Imaging (2005) %P 204 - 214 %I SPIE %B SPIE Proceedings Series %@ false
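Two details of the abstract above are easy to make concrete: the quoted contrast ratios expressed in orders of magnitude, and the compressive, photoreceptor-style nonlinearity that HDR metrics apply to luminance before comparing images. The Naka-Rushton form below is a common illustrative choice, not necessarily the exact response function calibrated in the paper; `sigma` and `n` are placeholder constants.

```python
import numpy as np

def photoreceptor_response(L, sigma=1.0, n=0.9):
    """Naka-Rushton-style compressive nonlinearity: an illustrative stand-in
    for the luminance response stage of an HDR visual difference metric."""
    Ln = np.power(L, n)
    return Ln / (Ln + sigma ** n)

# Contrast ratios quoted in the abstract, in orders of magnitude:
print(np.log10(50_000))  # HDR display, ~4.7 orders of magnitude
print(np.log10(700))     # LCD display, ~2.8 orders of magnitude
```

The response saturates toward 1 for bright pixels and stays near 0 for dark ones, so differences are measured in a space where equal steps are closer to equally visible.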
Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2005b. A perceptual framework for contrast processing of high dynamic range images. APGV ’05: Proceedings of the 2nd symposium on Applied perception in graphics and visualization, ACM.
Abstract
In this work we propose a framework for image processing in a visual response space, in which contrast values directly correlate with their visibility in an image. Our framework involves a transformation of an image from luminance space to a pyramid of low-pass contrast images and then to the visual response space. After modifying response values, the transformation can be reversed to produce the resulting image. To predict the visibility of suprathreshold contrast, we derive a transducer function for the full range of contrast levels that can be found in High Dynamic Range images. We show that a complex contrast compression operation, which preserves textures of small contrast, is reduced to a linear scaling in the proposed visual response space.
Export
BibTeX
@inproceedings{mantiuk2004::contrast, TITLE = {A perceptual framework for contrast processing of high dynamic range images}, AUTHOR = {Mantiuk, Rafal and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Malik, Jitendra and Koenderink, Jan J.}, LANGUAGE = {eng}, ISBN = {1-59593-139-2}, LOCALID = {Local-ID: C125675300671F7B-C07FBDA152C52871C12570700034B914-mantiuk2004::contrast}, PUBLISHER = {ACM}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {In this work we propose a framework for image processing in a visual response space, in which contrast values directly correlate with their visibility in an image. Our framework involves a transformation of an image from luminance space to a pyramid of low-pass contrast images and then to the visual response space. After modifying response values, the transformation can be reversed to produce the resulting image. To predict the visibility of suprathreshold contrast, we derive a transducer function for the full range of contrast levels that can be found in High Dynamic Range images. We show that a complex contrast compression operation, which preserves textures of small contrast, is reduced to a linear scaling in the proposed visual response space.}, BOOKTITLE = {APGV '05: Proceedings of the 2nd symposium on Appied perception in graphics and visualization}, PAGES = {87--94}, }
Endnote
%0 Conference Proceedings %A Mantiuk, Rafal %A Myszkowski, Karol %A Seidel, Hans-Peter %E Malik, Jitendra %E Koenderink, Jan J. %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T A perceptual framework for contrast processing of high dynamic range images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-25BB-F %F EDOC: 278998 %F OTHER: Local-ID: C125675300671F7B-C07FBDA152C52871C12570700034B914-mantiuk2004::contrast %I ACM %D 2005 %B Untitled Event %Z date of event: 2005-08-26 - %C Coruna, Spain %X In this work we propose a framework for image processing in a visual response space, in which contrast values directly correlate with their visibility in an image. Our framework involves a transformation of an image from luminance space to a pyramid of low-pass contrast images and then to the visual response space. After modifying response values, the transformation can be reversed to produce the resulting image. To predict the visibility of suprathreshold contrast, we derive a transducer function for the full range of contrast levels that can be found in High Dynamic Range images. We show that a complex contrast compression operation, which preserves textures of small contrast, is reduced to a linear scaling in the proposed visual response space. %B APGV '05: Proceedings of the 2nd symposium on Appied perception in graphics and visualization %P 87 - 94 %I ACM %@ 1-59593-139-2
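The pipeline described above (luminance, to low-pass contrast, to visual response, linear scaling, then reconstruction) can be caricatured at a single scale: scale the low-frequency band of log luminance linearly while keeping small-contrast detail intact. This single-scale base/detail sketch stands in for the paper's multi-scale pyramid and transducer function; the box filter and the factor `k` are assumptions.

```python
import numpy as np

def box_blur(x, radius=2):
    """Separable-equivalent box blur with edge padding (stand-in for a
    low-pass pyramid level)."""
    k = 2 * radius + 1
    pad = np.pad(x, radius, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def compress_contrast(L, k=0.4):
    """Single-scale sketch of contrast-domain compression: the low-pass band
    of log luminance is scaled linearly, small-contrast detail is preserved."""
    logL = np.log10(np.maximum(L, 1e-6))
    base = box_blur(logL)
    detail = logL - base                 # small-scale contrast, kept as-is
    return 10.0 ** (k * base + detail)   # linear scaling of the base band
```

With `k < 1` the overall dynamic range shrinks while fine texture contrast survives, which is the behavior the abstract attributes to linear scaling in the visual response space.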
Smyk, M., Kinuwaki, S., Durikovic, R., and Myszkowski, K. 2005. Temporally Coherent Irradiance Caching for High Quality Animation Rendering. The European Association for Computer Graphics 26th Annual Conference : EUROGRAPHICS 2005, Blackwell.
Export
BibTeX
@inproceedings{Smyk05EG, TITLE = {Temporally Coherent Irradiance Caching for High Quality Animation Rendering}, AUTHOR = {Smyk, Miloslaw and Kinuwaki, Shin-ichi and Durikovic, Roman and Myszkowski, Karol}, EDITOR = {Alexa, Marc and Marks, Joe}, LANGUAGE = {eng}, ISSN = {0167-7055}, LOCALID = {Local-ID: C125675300671F7B-9292A579A1D2DBAEC1256FE9004A40A0-Smyk05EG}, PUBLISHER = {Blackwell}, YEAR = {2005}, DATE = {2005}, BOOKTITLE = {The European Association for Computer Graphics 26th Annual Conference : EUROGRAPHICS 2005}, PAGES = {401--412}, SERIES = {Computer Graphics Forum}, }
Endnote
%0 Conference Proceedings %A Smyk, Miloslaw %A Kinuwaki, Shin-ichi %A Durikovic, Roman %A Myszkowski, Karol %E Alexa, Marc %E Marks, Joe %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Temporally Coherent Irradiance Caching for High Quality Animation Rendering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-27D9-E %F EDOC: 278974 %F OTHER: Local-ID: C125675300671F7B-9292A579A1D2DBAEC1256FE9004A40A0-Smyk05EG %I Blackwell %D 2005 %B Untitled Event %Z date of event: 2005-08-29 - %C Dublin, Ireland %B The European Association for Computer Graphics 26th Annual Conference : EUROGRAPHICS 2005 %P 401 - 412 %I Blackwell %B Computer Graphics Forum %@ false
Yoshida, A., Blanz, V., Myszkowski, K., and Seidel, H.-P. 2005. Perceptual Evaluation of Tone Mapping Operators with Real-World Scenes. Human Vision and Electronic Imaging X, IS&T/SPIE’s 17th Annual Symposium on Electronic Imaging (2005), SPIE.
Abstract
A number of successful tone mapping operators for contrast compression have been proposed due to the need to visualize high dynamic range (HDR) images on low dynamic range devices. They were inspired by fields as diverse as image processing, photographic practice, and modeling of the human visual system (HVS). The variety of approaches calls for a systematic perceptual evaluation of their performance. We conduct a psychophysical experiment based on a direct comparison between the appearance of real-world scenes and HDR images of these scenes displayed on a low dynamic range monitor. In our experiment, HDR images are tone mapped by seven existing tone mapping operators. The primary interest of this psychophysical experiment is to assess the differences in how tone mapped images are perceived by human observers and to find out which attributes of image appearance account for these differences when tone mapped images are compared directly with their corresponding real-world scenes rather than with each other. The human subjects rate image naturalness, overall contrast, overall brightness, and detail reproduction in dark and bright image regions with respect to the corresponding real-world scene. The results indicate substantial differences in perception of images produced by individual tone mapping operators. We observe a clear distinction between global and local operators in favor of the latter, and we classify the tone mapping operators according to naturalness and appearance attributes.
Export
BibTeX
@inproceedings{Yoshida2005, TITLE = {Perceptual Evaluation of Tone Mapping Operators with Real-World Scenes}, AUTHOR = {Yoshida, Akiko and Blanz, Volker and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Rogowitz, Bernice E. and Pappas, Thrasyvoulos N. and Daly, Scott J.}, LANGUAGE = {eng}, ISSN = {0277-786X}, LOCALID = {Local-ID: C125675300671F7B-6BD5753531007D22C1256F5C006B5D8C-Yoshida2005}, PUBLISHER = {SPIE}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {A number of successful tone mapping operators for contrast compression have been proposed due to the need to visualize high dynamic range (HDR) images on low dynamic range devices. They were inspired by fields as diverse as image processing, photographic practice, and modeling of the human visual system (HVS). The variety of approaches calls for a systematic perceptual evaluation of their performance. We conduct a psychophysical experiment based on a direct comparison between the appearance of real-world scenes and HDR images of these scenes displayed on a low dynamic range monitor. In our experiment, HDR images are tone mapped by seven existing tone mapping operators. The primary interest of this psychophysical experiment is to assess the differences in how tone mapped images are perceived by human observers and to find out which attributes of image appearance account for these differences when tone mapped images are compared directly with their corresponding real-world scenes rather than with each other. The human subjects rate image naturalness, overall contrast, overall brightness, and detail reproduction in dark and bright image regions with respect to the corresponding real-world scene. The results indicate substantial differences in perception of images produced by individual tone mapping operators.
We observe a clear distinction between global and local operators in favor of the latter, and we classify the tone mapping operators according to naturalness and appearance attributes.}, BOOKTITLE = {Human Vision and Electronic Imaging X, IS\&T/SPIE's 17th Annual Symposium on Electronic Imaging (2005)}, PAGES = {192--203}, SERIES = {SPIE Proceedings Series}, }
Endnote
%0 Conference Proceedings %A Yoshida, Akiko %A Blanz, Volker %A Myszkowski, Karol %A Seidel, Hans-Peter %E Rogowitz, Bernice E. %E Pappas, Thrasyvoulos N. %E Daly, Scott J. %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptual Evaluation of Tone Mapping Operators with Real-World Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2759-C %F EDOC: 278958 %F OTHER: Local-ID: C125675300671F7B-6BD5753531007D22C1256F5C006B5D8C-Yoshida2005 %I SPIE %D 2005 %B Untitled Event %Z date of event: 2005-01-16 - %C San Jose, USA %X A number of successful tone mapping operators for contrast compression have been proposed due to the need to visualize high dynamic range (HDR) images on low dynamic range devices. They were inspired by fields as diverse as image processing, photographic practice, and modeling of the human visual system (HVS). The variety of approaches calls for a systematic perceptual evaluation of their performance. We conduct a psychophysical experiment based on a direct comparison between the appearance of real-world scenes and HDR images of these scenes displayed on a low dynamic range monitor. In our experiment, HDR images are tone mapped by seven existing tone mapping operators. The primary interest of this psychophysical experiment is to assess the differences in how tone mapped images are perceived by human observers and to find out which attributes of image appearance account for these differences when tone mapped images are compared directly with their corresponding real-world scenes rather than with each other. The human subjects rate image naturalness, overall contrast, overall brightness, and detail reproduction in dark and bright image regions with respect to the corresponding real-world scene.
The results indicate substantial differences in perception of images produced by individual tone mapping operators. We observe a clear distinction between global and local operators in favor of the latter, and we classify the tone mapping operators according to naturalness and appearance attributes. %B Human Vision and Electronic Imaging X, IS&T/SPIE's 17th Annual Symposium on Electronic Imaging (2005) %P 192 - 203 %I SPIE %B SPIE Proceedings Series %@ false
2004
Dmitriev, K., Annen, T., Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2004. A CAVE System for Interactive Modeling of Global Illumination in Car Interior. ACM Symposium on Virtual Reality Software and Technology (VRST 2004), ACM.
Abstract
Global illumination dramatically improves realistic appearance of rendered scenes, but usually it is neglected in VR systems due to its high costs. In this work we present an efficient global illumination solution specifically tailored for those CAVE applications, which require an immediate response for dynamic light changes and allow for free motion of the observer, but involve scenes with static geometry. As an application example we choose the car interior modeling under free driving conditions. We illuminate the car using dynamically changing High Dynamic Range (HDR) environment maps and use the Precomputed Radiance Transfer (PRT) method for the global illumination computation. We leverage the PRT method to handle scenes with non-trivial topology represented by complex meshes. Also, we propose a hybrid of PRT and final gathering approach for high-quality rendering of objects with complex Bi-directional Reflectance Distribution Function (BRDF). We use this method for predictive rendering of the navigation LCD panel based on its measured BRDF. Since the global illumination computation leads to HDR images we propose a tone mapping algorithm tailored specifically for the CAVE. We employ head tracking to identify the observed screen region and derive for it proper luminance adaptation conditions, which are then used for tone mapping on all walls in the CAVE. We distribute our global illumination and tone mapping computation on all CPUs and GPUs available in the CAVE, which enables us to achieve interactive performance even for the costly final gathering approach.
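For diffuse surfaces, the Precomputed Radiance Transfer relighting mentioned in the abstract boils down to a dot product between per-vertex transfer coefficients (precomputed offline) and the spherical-harmonic coefficients of the current HDR environment map. A minimal sketch; the coefficient values below are made up for illustration:

```python
import numpy as np

def prt_shade(transfer, env_sh):
    """Diffuse PRT relighting: outgoing radiance per vertex is the dot
    product of its precomputed transfer vector with the SH coefficients
    of the (dynamically changing) environment map.

    transfer: (n_vertices, n_coeffs) precomputed transfer matrix
    env_sh:   (n_coeffs,) SH projection of the current environment
    """
    return transfer @ env_sh  # (n_vertices,) radiance values

# Toy example: 3 vertices, 4 SH coefficients (bands 0 and 1).
transfer = np.array([[1.0, 0.2, 0.0, 0.1],
                     [0.8, 0.0, 0.3, 0.0],
                     [0.5, 0.1, 0.1, 0.1]])
env_sh = np.array([2.0, 0.5, 0.5, 0.0])  # updated every frame as lighting changes
radiance = prt_shade(transfer, env_sh)
```

Because only the environment's SH projection changes per frame, the per-frame cost is a single matrix-vector product, which is what makes the interactive response to dynamic light changes feasible.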
Export
BibTeX
@inproceedings{dmitriev04acs, TITLE = {A {CAVE} System for Interactive Modeling of Global Illumination in Car Interior}, AUTHOR = {Dmitriev, Kirill and Annen, Thomas and Krawczyk, Grzegorz and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Lau, Rynson and Baciu, George}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-9738E2CF6F79F214C1256F5E004819E6-dmitriev04acs}, PUBLISHER = {ACM}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {Global illumination dramatically improves realistic appearance of rendered scenes, but usually it is neglected in VR systems due to its high costs. In this work we present an efficient global illumination solution specifically tailored for those CAVE applications, which require an immediate response for dynamic light changes and allow for free motion of the observer, but involve scenes with static geometry. As an application example we choose the car interior modeling under free driving conditions. We illuminate the car using dynamically changing High Dynamic Range (HDR) environment maps and use the Precomputed Radiance Transfer (PRT) method for the global illumination computation. We leverage the PRT method to handle scenes with non-trivial topology represented by complex meshes. Also, we propose a hybrid of PRT and final gathering approach for high-quality rendering of objects with complex Bi-directional Reflectance Distribution Function (BRDF). We use this method for predictive rendering of the navigation LCD panel based on its measured BRDF. Since the global illumination computation leads to HDR images we propose a tone mapping algorithm tailored specifically for the CAVE. We employ head tracking to identify the observed screen region and derive for it proper luminance adaptation conditions, which are then used for tone mapping on all walls in the CAVE. 
We distribute our global illumination and tone mapping computation on all CPUs and GPUs available in the CAVE, which enables us to achieve interactive performance even for the costly final gathering approach.}, BOOKTITLE = {ACM Symposium on Virtual Reality Software and Technology (VRST 2004)}, PAGES = {137--145}, }
Endnote
%0 Conference Proceedings %A Dmitriev, Kirill %A Annen, Thomas %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Seidel, Hans-Peter %E Lau, Rynson %E Baciu, George %+ Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Max Planck Society %T A CAVE System for Interactive Modeling of Global Illumination in Car Interior : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-29FF-A %F EDOC: 231986 %F OTHER: Local-ID: C125675300671F7B-9738E2CF6F79F214C1256F5E004819E6-dmitriev04acs %I ACM %D 2004 %B Untitled Event %Z date of event: 2004-11-10 - %C Hong Kong %X Global illumination dramatically improves realistic appearance of rendered scenes, but usually it is neglected in VR systems due to its high costs. In this work we present an efficient global illumination solution specifically tailored for those CAVE applications, which require an immediate response for dynamic light changes and allow for free motion of the observer, but involve scenes with static geometry. As an application example we choose the car interior modeling under free driving conditions. We illuminate the car using dynamically changing High Dynamic Range (HDR) environment maps and use the Precomputed Radiance Transfer (PRT) method for the global illumination computation. We leverage the PRT method to handle scenes with non-trivial topology represented by complex meshes. Also, we propose a hybrid of PRT and final gathering approach for high-quality rendering of objects with complex Bi-directional Reflectance Distribution Function (BRDF). We use this method for predictive rendering of the navigation LCD panel based on its measured BRDF. Since the global illumination computation leads to HDR images we propose a tone mapping algorithm tailored specifically for the CAVE. 
We employ head tracking to identify the observed screen region and derive for it proper luminance adaptation conditions, which are then used for tone mapping on all walls in the CAVE. We distribute our global illumination and tone mapping computation on all CPUs and GPUs available in the CAVE, which enables us to achieve interactive performance even for the costly final gathering approach. %B ACM Symposium on Virtual Reality Software and Technology (VRST 2004) %P 137 - 145 %I ACM
Ershov, S., Durikovic, R., Kolchin, K., and Myszkowski, K. 2004. Reverse engineering approach to appearance-based design of metallic and pearlescent paints. The Visual Computer 20.
Abstract
We propose a new approach to interactive design of metallic and pearlescent coatings, such as automotive paints and plastic finishes of electronic appliances. This approach includes solving the inverse problem, that is, finding the pigment composition of a paint from its bidirectional reflectance distribution function (BRDF) based on a simple paint model. The inverse problem is solved by two consecutive optimizations calculated in real time on a contemporary PC. Such reverse engineering can serve as a starting point for subsequent design of new paints in terms of appearance attributes that are directly connected to the physical parameters of our model. This allows the user to have a paint composition in parallel with the appearance being designed.
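The inverse problem in the abstract, recovering pigment parameters from BRDF samples, can be sketched with a toy fit. The two-lobe model and parameter names below are illustrative assumptions, not the authors' actual paint model; here the model is linear in the pigment weights, so ordinary least squares recovers them directly:

```python
import numpy as np

def paint_brdf(theta, kd, ks, n=20):
    """Toy paint model: a diffuse pigment term plus a glossy flake lobe.
    (Hypothetical stand-in for the paper's physical paint model.)"""
    return kd + ks * np.cos(theta) ** n

def fit_pigments(theta, measured, n=20):
    """Recover (kd, ks) from measured BRDF samples by least squares;
    the toy model is linear in the pigment weights."""
    A = np.stack([np.ones_like(theta), np.cos(theta) ** n], axis=1)
    (kd, ks), *_ = np.linalg.lstsq(A, measured, rcond=None)
    return kd, ks

# Synthesize "measurements" from known weights, then invert them.
theta = np.linspace(0.0, np.pi / 3, 50)
measured = paint_brdf(theta, kd=0.3, ks=1.2)
kd_est, ks_est = fit_pigments(theta, measured)
```

The recovered parameters could then seed an interactive appearance-editing loop, matching the abstract's idea of keeping a paint composition in sync with the appearance being designed.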
Export
BibTeX
@article{Ershov2004, TITLE = {Reverse engineering approach to appearance-based design of metallic and pearlescent paints}, AUTHOR = {Ershov, Sergey and Durikovic, Roman and Kolchin, Konstantin and Myszkowski, Karol}, LANGUAGE = {eng}, ISSN = {0178-2789}, LOCALID = {Local-ID: C125675300671F7B-1ED9315CB8B336DAC1256F5C003D02C9-Ershov2004}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {We propose a new approach to interactive design of metallic and pearlescent coatings, such as automotive paints and plastic finishes of electronic appliances. This approach includes solving the inverse problem, that is, finding pigment composition of a paint from its bidirectional reflectance distribution function (BRDF) based on a simple paint model. The inverse problem is solved by two consecutive optimizations calculated in realtime on a contemporary PC. Such reverse engineering can serve as a starting point for subsequent design of new paints in terms of appearance attributes that are directly connected to the physical parameters of our model. This allows the user to have a paint composition in parallel with the appearance being designed.}, JOURNAL = {The Visual Computer}, VOLUME = {20}, PAGES = {587--600}, }
Endnote
%0 Journal Article %A Ershov, Sergey %A Durikovic, Roman %A Kolchin, Konstantin %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Reverse engineering approach to appearance-based design of metallic and pearlescent paints : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B2B-7 %F EDOC: 232050 %F OTHER: Local-ID: C125675300671F7B-1ED9315CB8B336DAC1256F5C003D02C9-Ershov2004 %D 2004 %* Review method: peer-reviewed %X We propose a new approach to interactive design of metallic and pearlescent coatings, such as automotive paints and plastic finishes of electronic appliances. This approach includes solving the inverse problem, that is, finding pigment composition of a paint from its bidirectional reflectance distribution function (BRDF) based on a simple paint model. The inverse problem is solved by two consecutive optimizations calculated in realtime on a contemporary PC. Such reverse engineering can serve as a starting point for subsequent design of new paints in terms of appearance attributes that are directly connected to the physical parameters of our model. This allows the user to have a paint composition in parallel with the appearance being designed. %J The Visual Computer %V 20 %& 587 %P 587 - 600 %@ false
Krawczyk, G., Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2004. Lightness Perception Inspired Tone Mapping. Proceedings APGV 2004, ACM.
Export
BibTeX
@inproceedings{Krawczyk2004, TITLE = {Lightness Perception Inspired Tone Mapping}, AUTHOR = {Krawczyk, Grzegorz and Mantiuk, Rafal and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, DOI = {10.1145/1012551.1012594}, LOCALID = {Local-ID: C125675300671F7B-07985C48329EC4DFC1256FC4002A5333-Krawczyk2004}, PUBLISHER = {ACM}, YEAR = {2004}, DATE = {2004}, BOOKTITLE = {Proceedings APGV 2004}, EDITOR = {Spencer, Stephen N.}, PAGES = {172--172}, ADDRESS = {Los Angeles, CA}, }
Endnote
%0 Conference Proceedings %A Krawczyk, Grzegorz %A Mantiuk, Rafal %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Lightness Perception Inspired Tone Mapping : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2543-E %F EDOC: 231335 %R 10.1145/1012551.1012594 %F OTHER: Local-ID: C125675300671F7B-07985C48329EC4DFC1256FC4002A5333-Krawczyk2004 %D 2004 %B 1st Symposium on Applied Perception in Graphics and Visualization %Z date of event: 2004-08-07 - 2004-08-08 %C Los Angeles, CA %B Proceedings APGV 2004 %E Spencer, Stephen N. %P 172 - 172 %I ACM
Mantiuk, R., Krawczyk, G., Myszkowski, K., and Seidel, H.-P. 2004a. Perception-motivated High Dynamic Range Video Encoding. ACM Transactions on Graphics 23.
Abstract
Due to rapid technological progress in high dynamic range (HDR) video capture and display, the efficient storage and transmission of such data is crucial for the completeness of any HDR imaging pipeline. We propose a new approach for inter-frame encoding of HDR video, which is embedded in the well-established MPEG-4 video compression standard. The key component of our technique is luminance quantization that is optimized for the contrast threshold perception in the human visual system. The quantization scheme requires only 10--11 bits to encode 12 orders of magnitude of visible luminance range and does not lead to perceivable contouring artifacts. Besides video encoding, the proposed quantization provides perceptually-optimized luminance sampling for fast implementation of any global tone mapping operator using a lookup table. To improve the quality of synthetic video sequences, we introduce a coding scheme for discrete cosine transform (DCT) blocks with high contrast. We demonstrate the capabilities of HDR video in a player, which enables decoding, tone mapping, and applying post-processing effects in real-time. The tone mapping algorithm as well as its parameters can be changed interactively while the video is playing. We can simulate post-processing effects such as glare, night vision, and motion blur, which appear very realistic due to the usage of HDR data.
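The quantization claim above (12 orders of magnitude of luminance into 10-11 bit codes) can be illustrated with a plain log-uniform encoding. This is a simplified stand-in for the paper's perceptually derived quantization function; the bit depth and luminance span come from the abstract, everything else is assumed:

```python
import numpy as np

def encode_luminance(L, bits=11, L_min=1e-4, L_max=1e8):
    """Map luminance (cd/m^2) to integer codes with log-uniform spacing.

    Illustrative only: the paper optimizes the spacing for contrast
    detection thresholds of the HVS rather than using plain log10.
    Covers 12 orders of magnitude with 2**bits codes.
    """
    L = np.clip(L, L_min, L_max)
    span = np.log10(L_max) - np.log10(L_min)
    t = (np.log10(L) - np.log10(L_min)) / span
    return np.round(t * (2**bits - 1)).astype(np.int32)

def decode_luminance(code, bits=11, L_min=1e-4, L_max=1e8):
    """Inverse mapping: integer code back to absolute luminance."""
    span = np.log10(L_max) - np.log10(L_min)
    t = code / (2**bits - 1)
    return 10.0 ** (np.log10(L_min) + t * span)

codes = encode_luminance(np.array([1e-4, 1.0, 100.0, 1e8]))
decoded = decode_luminance(codes)
```

The same lookup-table idea supports the abstract's point about fast global tone mapping: any per-luminance operator can be tabulated once over the 2048 codes.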
Export
BibTeX
@article{Mantiuk2004HDREnc, TITLE = {Perception-motivated High Dynamic Range Video Encoding}, AUTHOR = {Mantiuk, Rafal and Krawczyk, Grzegorz and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Marks, Joe}, LANGUAGE = {eng}, ISSN = {0730-0301}, LOCALID = {Local-ID: C125675300671F7B-2BA4C8B1EE81007BC1256EC1003757E0-Mantiuk2004HDREnc}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {Due to rapid technological progress in high dynamic range (HDR) video capture and display, the efficient storage and transmission of such data is crucial for the completeness of any HDR imaging pipeline. We propose a new approach for inter-frame encoding of HDR video, which is embedded in the well-established MPEG-4 video compression standard. The key component of our technique is luminance quantization that is optimized for the contrast threshold perception in the human visual system. The quantization scheme requires only 10--11 bits to encode 12 orders of magnitude of visible luminance range and does not lead to perceivable contouring artifacts. Besides video encoding, the proposed quantization provides perceptually-optimized luminance sampling for fast implementation of any global tone mapping operator using a lookup table. To improve the quality of synthetic video sequences, we introduce a coding scheme for discrete cosine transform (DCT) blocks with high contrast. We demonstrate the capabilities of HDR video in a player, which enables decoding, tone mapping, and applying post-processing effects in real-time. The tone mapping algorithm as well as its parameters can be changed interactively while the video is playing. We can simulate post-processing effects such as glare, night vision, and motion blur, which appear very realistic due to the usage of HDR data.}, JOURNAL = {ACM Transactions on Graphics}, VOLUME = {23}, PAGES = {733--741}, }
Endnote
%0 Journal Article %A Mantiuk, Rafal %A Krawczyk, Grzegorz %A Myszkowski, Karol %A Seidel, Hans-Peter %E Marks, Joe %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perception-motivated High Dynamic Range Video Encoding : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2AFA-0 %F EDOC: 231948 %F OTHER: Local-ID: C125675300671F7B-2BA4C8B1EE81007BC1256EC1003757E0-Mantiuk2004HDREnc %D 2004 %* Review method: peer-reviewed %X Due to rapid technological progress in high dynamic range (HDR) video capture and display, the efficient storage and transmission of such data is crucial for the completeness of any HDR imaging pipeline. We propose a new approach for inter-frame encoding of HDR video, which is embedded in the well-established MPEG-4 video compression standard. The key component of our technique is luminance quantization that is optimized for the contrast threshold perception in the human visual system. The quantization scheme requires only 10--11 bits to encode 12 orders of magnitude of visible luminance range and does not lead to perceivable contouring artifacts. Besides video encoding, the proposed quantization provides perceptually-optimized luminance sampling for fast implementation of any global tone mapping operator using a lookup table. To improve the quality of synthetic video sequences, we introduce a coding scheme for discrete cosine transform (DCT) blocks with high contrast. We demonstrate the capabilities of HDR video in a player, which enables decoding, tone mapping, and applying post-processing effects in real-time. The tone mapping algorithm as well as its parameters can be changed interactively while the video is playing. 
We can simulate post-processing effects such as glare, night vision, and motion blur, which appear very realistic due to the usage of HDR data. %J ACM Transactions on Graphics %V 23 %& 733 %P 733 - 741 %@ false
Mantiuk, R., Myszkowski, K., and Seidel, H.-P. 2004b. Visible Difference Predictor for High Dynamic Range Images. 2004 IEEE International Conference on Systems, Man & Cybernetics (SMC 2004), IEEE.
Abstract
Since new imaging and rendering systems commonly use physically accurate lighting information in the form of High Dynamic Range data, there is a need for an automatic visual quality assessment of the resulting images. In this work we extend the Visual Difference Predictor (VDP) developed by Daly to handle HDR data. This lets us predict whether a human observer is able to perceive differences for a pair of HDR images under the adaptation conditions corresponding to the real scene observation.
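The core prediction task described above can be caricatured per pixel: a difference is "visible" when it exceeds a luminance-dependent detection threshold. The sketch below is a crude stand-in for the full VDP (a constant Weber fraction replaces its threshold-versus-intensity and CSF machinery; the value 0.02 is an assumed photopic figure):

```python
import numpy as np

def visible_difference(L_ref, L_test, weber=0.02):
    """Flag pixels whose luminance difference exceeds a detection threshold.

    Toy approximation: adaptation is taken to follow the reference
    luminance, and the threshold is a fixed Weber fraction of it.
    The real HDR VDP models adaptation, CSF, and masking properly.
    """
    L_adapt = np.maximum(L_ref, 1e-6)          # local adaptation level
    threshold = weber * L_adapt                # Weber-law threshold
    return np.abs(L_test - L_ref) > threshold  # boolean visibility map

L_ref = np.array([100.0, 100.0])
L_test = np.array([101.0, 105.0])   # 1% change vs. 5% change
vis = visible_difference(L_ref, L_test)
```

Working on absolute HDR luminance, rather than display-referred pixel values, is exactly what distinguishes this setting from the original low-dynamic-range VDP.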
Export
BibTeX
@inproceedings{Mantiuk2004HDRVDP, TITLE = {Visible Difference Predictor for High Dynamic Range Images}, AUTHOR = {Mantiuk, Rafal and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {0-7803-8567-5}, DOI = {10.1109/ICSMC.2004.1400750}, LOCALID = {Local-ID: C125675300671F7B-4A5E8413EEF67127C1256F330053216A-Mantiuk2004HDRVDP}, PUBLISHER = {IEEE}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {Since new imaging and rendering systems commonly use physically accurate lighting information in the form of High-Dynamic Range data, there is a need for an automatic visual quality assessment of the resulting images. In this work we extend the Visual Difference Predictor (VDP) developed by Daly to handle HDR data. This let us predict if a human observer is able to perceive differences for a pair of HDR images under the adaptation conditions corresponding to the real scene observation.}, BOOKTITLE = {2004 IEEE International Conference on Systems, Man \& Cybernetics (SMC 2004)}, EDITOR = {Thissen, Wil and Wieringa, Peter and Pantic, Maja and Ludema, Marcel}, PAGES = {2763--2769}, ADDRESS = {The Hague, The Netherlands}, }
Endnote
%0 Conference Proceedings %A Mantiuk, Rafal %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Visible Difference Predictor for High Dynamic Range Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B78-C %F EDOC: 231938 %R 10.1109/ICSMC.2004.1400750 %F OTHER: Local-ID: C125675300671F7B-4A5E8413EEF67127C1256F330053216A-Mantiuk2004HDRVDP %D 2004 %B 2004 IEEE International Conference on Systems, Man & Cybernetics %Z date of event: 2004-10-10 - 2004-10-13 %C The Hague, The Netherlands %X Since new imaging and rendering systems commonly use physically accurate lighting information in the form of High-Dynamic Range data, there is a need for an automatic visual quality assessment of the resulting images. In this work we extend the Visual Difference Predictor (VDP) developed by Daly to handle HDR data. This let us predict if a human observer is able to perceive differences for a pair of HDR images under the adaptation conditions corresponding to the real scene observation. %B 2004 IEEE International Conference on Systems, Man & Cybernetics %E Thissen, Wil; Wieringa, Peter; Pantic, Maja; Ludema, Marcel %P 2763 - 2769 %I IEEE %@ 0-7803-8567-5
Tawara, T., Myszkowski, K., and Seidel, H.-P. 2004a. Exploiting Temporal Coherence in Final Gathering for Dynamic Scenes. Computer Graphics International (CGI 2004), IEEE.
Abstract
Efficient global illumination computation in dynamically changing environments is an important practical problem. In high-quality animation rendering, the costly "final gathering" technique is commonly used. We extend this technique into the temporal domain by exploiting coherence between subsequent frames. For this purpose we store previously computed incoming radiance samples and refresh them evenly in space and time using aging criteria. The approach is based upon a two-pass photon mapping algorithm with irradiance cache, but it can be applied in other gathering methods as well. The algorithm significantly reduces the cost of expensive indirect lighting computation and suppresses temporal aliasing with respect to state-of-the-art frame-by-frame rendering techniques.
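The aging idea in the abstract can be sketched as a per-frame refresh policy over cached incoming-radiance samples: every sample grows older each frame, and a fixed update budget is spent on the oldest ones so refreshes spread evenly over time. The budget fraction and data layout below are assumptions for illustration:

```python
import numpy as np

def refresh_cache(ages, budget_fraction=0.1):
    """Choose which cached radiance samples to recompute this frame.

    ages: integer array, frames since each sample was last recomputed.
    Returns the updated ages and the indices selected for refresh.
    (Sketch only: the paper's criteria also account for spatial
    distribution, not just age.)
    """
    ages = ages + 1                               # everything ages by one frame
    k = max(1, int(budget_fraction * ages.size))  # per-frame update budget
    refresh = np.argsort(ages)[-k:]               # the k oldest samples
    ages[refresh] = 0                             # recomputed -> age reset
    return ages, refresh

ages = np.array([5, 1, 3, 0, 4])
ages, refreshed = refresh_cache(ages, budget_fraction=0.4)
```

Because no sample's age can grow without bound, stale indirect lighting is bounded too, which is what suppresses the temporal flicker of independent frame-by-frame caching.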
Export
BibTeX
@inproceedings{Tawara2004a, TITLE = {Exploiting Temporal Coherence in Final Gathering for Dynamic Scenes}, AUTHOR = {Tawara, Takehiro and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Cohen-Or, Daniel and Jain, Lakhmi and Magnenat-Thalmann, Nadia}, LANGUAGE = {eng}, ISBN = {0-7695-2171-1}, LOCALID = {Local-ID: C125675300671F7B-6EA03CEF62237DD0C1256E46006A8293-Tawara2004a}, PUBLISHER = {IEEE}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {Efficient global illumination computation in dynamically changing environments is an important practical problem. In high-quality animation rendering costly "final gathering" technique is commonly used. We extend this technique into temporal domain by exploiting coherence between the subsequent frames. For this purpose we store previously computed incoming radiance samples and refresh them evenly in space and time using some aging criteria. The approach is based upon a two-pass photon mapping algorithm with irradiance cache, but it can be applied also in other gathering methods. The algorithm significantly reduces the cost of expensive indirect lighting computation and suppresses temporal aliasing with respect to the state of the art frame-by-frame rendering techniques.}, BOOKTITLE = {Computer Graphics International (CGI 2004)}, PAGES = {110--119}, }
Endnote
%0 Conference Proceedings %A Tawara, Takehiro %A Myszkowski, Karol %A Seidel, Hans-Peter %E Cohen-Or, Daniel %E Jain, Lakhmi %E Magnenat-Thalmann, Nadia %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Exploiting Temporal Coherence in Final Gathering for Dynamic Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2A93-5 %F EDOC: 231897 %F OTHER: Local-ID: C125675300671F7B-6EA03CEF62237DD0C1256E46006A8293-Tawara2004a %I IEEE %D 2004 %B Untitled Event %Z date of event: 2004-06-16 - %C Crete, Greece %X Efficient global illumination computation in dynamically changing environments is an important practical problem. In high-quality animation rendering costly "final gathering" technique is commonly used. We extend this technique into temporal domain by exploiting coherence between the subsequent frames. For this purpose we store previously computed incoming radiance samples and refresh them evenly in space and time using some aging criteria. The approach is based upon a two-pass photon mapping algorithm with irradiance cache, but it can be applied also in other gathering methods. The algorithm significantly reduces the cost of expensive indirect lighting computation and suppresses temporal aliasing with respect to the state of the art frame-by-frame rendering techniques. %B Computer Graphics International (CGI 2004) %P 110 - 119 %I IEEE %@ 0-7695-2171-1
Tawara, T., Myszkowski, K., and Seidel, H.-P. 2004b. Efficient Rendering of Strong Secondary Lighting in Photon Mapping Algorithm. Theory and Practice of Computer Graphics 2004, IEEE.
Abstract
In this paper we propose an efficient algorithm for handling strong secondary light sources within the photon mapping framework. We introduce an additional photon map as an implicit representation of such light sources. At the rendering stage this map is used for the explicit sampling of strong indirect lighting, much as is usually done for primary light sources. Our technique works fully automatically, improves computation performance, and leads to better image quality than traditional rendering approaches.
Export
BibTeX
@inproceedings{Tawara2004c, TITLE = {Efficient Rendering of Strong Secondary Lighting in Photon Mapping Algorithm}, AUTHOR = {Tawara, Takehiro and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Lever, Paul G.}, LANGUAGE = {eng}, ISBN = {0-7695-2137-1}, LOCALID = {Local-ID: C125675300671F7B-9FD06C3F844A7B2EC1256E5C003A7515-Tawara2004c}, PUBLISHER = {IEEE}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {In this paper we propose an efficient algorithm for handling strong secondary light sources within the photon mapping framework. We introduce an additional photon map as an implicit representation of such light sources. At the rendering stage this map is used for the explicit sampling of strong indirect lighting in a similar way as it is usually performed for primary light sources. Our technique works fully automatically, improves the computation performance, and leads to a better image quality than traditional rendering approaches.}, BOOKTITLE = {Theory and Practice of Computer Graphics 2004}, PAGES = {174--178}, }
Endnote
%0 Conference Proceedings %A Tawara, Takehiro %A Myszkowski, Karol %A Seidel, Hans-Peter %E Lever, Paul G. %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Rendering of Strong Secondary Lighting in Photon Mapping Algorithm : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2A80-F %F EDOC: 231931 %F OTHER: Local-ID: C125675300671F7B-9FD06C3F844A7B2EC1256E5C003A7515-Tawara2004c %I IEEE %D 2004 %B Untitled Event %Z date of event: 2004-06-08 - %C University of Bournemouth, UK %X In this paper we propose an efficient algorithm for handling strong secondary light sources within the photon mapping framework. We introduce an additional photon map as an implicit representation of such light sources. At the rendering stage this map is used for the explicit sampling of strong indirect lighting in a similar way as it is usually performed for primary light sources. Our technique works fully automatically, improves the computation performance, and leads to a better image quality than traditional rendering approaches. %B Theory and Practice of Computer Graphics 2004 %P 174 - 178 %I IEEE %@ 0-7695-2137-1
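The core idea of the entry above, treating bright indirect illumination as if it came from explicit light sources, can be sketched as follows. The grid binning, `cell_size`, and `flux_threshold` are illustrative assumptions, not the paper's implicit photon-map representation:

```python
from collections import defaultdict

def promote_secondary_lights(photons, cell_size=1.0, flux_threshold=5.0):
    """Toy version of the idea above: bin photon hits into grid cells and
    promote cells carrying strong flux to virtual point lights that can
    then be sampled explicitly, like primary lights (illustrative sketch)."""
    cells = defaultdict(float)
    for (x, y, z), flux in photons:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        cells[key] += flux
    # Each sufficiently bright cell becomes a virtual light at its centre.
    return [((kx + 0.5) * cell_size, (ky + 0.5) * cell_size,
             (kz + 0.5) * cell_size, f)
            for (kx, ky, kz), f in cells.items() if f >= flux_threshold]
```

Explicitly sampling such promoted sources at render time mirrors how primary lights are handled, instead of relying on the density estimate alone to reconstruct strong indirect lighting.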
Tawara, T., Myszkowski, K., Dmitriev, K., Havran, V., Damez, C., and Seidel, H.-P. 2004c. Exploiting Temporal Coherence in Global Illumination (an invited paper). Spring Conference on Computer Graphics (SCCG 2004), ACM.
Abstract
Producing high-quality animations featuring rich object appearance and compelling lighting effects is very time consuming using traditional frame-by-frame rendering systems. In this paper we present a number of global illumination and rendering solutions that exploit temporal coherence in the lighting distribution of subsequent frames to improve computation performance and overall animation quality. Our strategy relies on extending well-known global illumination techniques, such as density-estimation photon tracing, photon mapping, and bidirectional path tracing, into the temporal domain; these techniques were originally designed to handle static scenes only.
Export
BibTeX
@inproceedings{Tawara2004b, TITLE = {Exploiting Temporal Coherence in Global Illumination (an invited paper)}, AUTHOR = {Tawara, Takehiro and Myszkowski, Karol and Dmitriev, Kirill and Havran, Vlastimil and Damez, Cyrille and Seidel, Hans-Peter}, EDITOR = {Pasko, Alexander}, LANGUAGE = {eng}, ISBN = {1-58113-914-4}, LOCALID = {Local-ID: C125675300671F7B-6088B687D952F1E4C1256EC1002F0C62-Tawara2004b}, PUBLISHER = {ACM}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {Producing high quality animations featuring rich object appearance and compelling lighting effects is very time consuming using traditional frame-by-frame rendering systems. In this paper we present a number of global illumination and rendering solutions that exploit temporal coherence in lighting distribution for subsequent frames to improve the computation performance and overall animation quality. Our strategy relies on extending into temporal domain well-known global illumination techniques such as density estimation photon tracing, photon mapping, and bi-directional path tracing, which were originally designed to handle static scenes only.}, BOOKTITLE = {Spring Conference on Computer Graphics (SCCG 2004)}, PAGES = {23--33}, }
Endnote
%0 Conference Proceedings %A Tawara, Takehiro %A Myszkowski, Karol %A Dmitriev, Kirill %A Havran, Vlastimil %A Damez, Cyrille %A Seidel, Hans-Peter %E Pasko, Alexander %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Exploiting Temporal Coherence in Global Illumination (an invited paper) : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2A95-1 %F EDOC: 231906 %F OTHER: Local-ID: C125675300671F7B-6088B687D952F1E4C1256EC1002F0C62-Tawara2004b %I ACM %D 2004 %B Untitled Event %Z date of event: 2004-04-22 - %C Budmerice, Slovakia %X Producing high quality animations featuring rich object appearance and compelling lighting effects is very time consuming using traditional frame-by-frame rendering systems. In this paper we present a number of global illumination and rendering solutions that exploit temporal coherence in lighting distribution for subsequent frames to improve the computation performance and overall animation quality. Our strategy relies on extending into temporal domain well-known global illumination techniques such as density estimation photon tracing, photon mapping, and bi-directional path tracing, which were originally designed to handle static scenes only. %B Spring Conference on Computer Graphics (SCCG 2004) %P 23 - 33 %I ACM %@ 1-58113-914-4
Weber, M., Milch, M., Myszkowski, K., Dmitriev, K., Rokita, P., and Seidel, H.-P. 2004. Spatio-Temporal Photon Density Estimation Using Bilateral Filtering. Computer Graphics International (CGI 2004), IEEE.
Abstract
Photon tracing and density estimation are well-established techniques in global illumination computation and the rendering of high-quality animation sequences. Using traditional density estimation techniques it is difficult to remove the stochastic noise inherent in photon-based methods without overblurring lighting details. In this paper we investigate the use of bilateral filtering for lighting reconstruction based on the local density of photon hit points. Bilateral filtering is applied in the spatio-temporal domain and provides control over the level of detail in the reconstructed lighting. All changes of lighting below this level are treated as stochastic noise and suppressed. Bilateral filtering proves efficient at preserving sharp features in lighting, which is particularly important for high-quality caustic reconstruction. Flickering between subsequent animation frames is also substantially reduced by extending bilateral filtering into the temporal domain.
Export
BibTeX
@inproceedings{Weber2004, TITLE = {Spatio-Temporal Photon Density Estimation Using Bilateral Filtering}, AUTHOR = {Weber, Markus and Milch, Marco and Myszkowski, Karol and Dmitriev, Kirill and Rokita, Przemyslaw and Seidel, Hans-Peter}, EDITOR = {Cohen-Or, Daniel and Jain, Lakhmi and Magnenat-Thalmann, Nadia}, LANGUAGE = {eng}, ISBN = {0-7695-2171-1}, LOCALID = {Local-ID: C125675300671F7B-E7C820E451C4356AC1256E46006AB0DF-Weber2004}, PUBLISHER = {IEEE}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {Photon tracing and density estimation are well established techniques in global illumination computation and rendering of high-quality animation sequences. Using traditional density estimation techniques it is difficult to remove stochastic noise inherent for photon-based methods while avoiding overblurring lighting details. In this paper we investigate the use of bilateral filtering for lighting reconstruction based on the local density of photon hit points. Bilateral filtering is applied in spatio-temporal domain and provides control over the level-of-details in reconstructed lighting. All changes of lighting below this level are treated as stochastic noise and are suppressed. Bilateral filtering proves to be efficient in preserving sharp features in lighting which is in particular important for high-quality caustic reconstruction. Also, flickering between subsequent animation frames is substantially reduced due to extending bilateral filtering into temporal domain.}, BOOKTITLE = {Computer Graphics International (CGI 2004)}, PAGES = {120--127}, }
Endnote
%0 Conference Proceedings %A Weber, Markus %A Milch, Marco %A Myszkowski, Karol %A Dmitriev, Kirill %A Rokita, Przemyslaw %A Seidel, Hans-Peter %E Cohen-Or, Daniel %E Jain, Lakhmi %E Magnenat-Thalmann, Nadia %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Spatio-Temporal Photon Density Estimation Using Bilateral Filtering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B40-6 %F EDOC: 231920 %F OTHER: Local-ID: C125675300671F7B-E7C820E451C4356AC1256E46006AB0DF-Weber2004 %I IEEE %D 2004 %B Untitled Event %Z date of event: 2004-06-16 - %C Crete, Greece %X Photon tracing and density estimation are well established techniques in global illumination computation and rendering of high-quality animation sequences. Using traditional density estimation techniques it is difficult to remove stochastic noise inherent for photon-based methods while avoiding overblurring lighting details. In this paper we investigate the use of bilateral filtering for lighting reconstruction based on the local density of photon hit points. Bilateral filtering is applied in spatio-temporal domain and provides control over the level-of-details in reconstructed lighting. All changes of lighting below this level are treated as stochastic noise and are suppressed. Bilateral filtering proves to be efficient in preserving sharp features in lighting which is in particular important for high-quality caustic reconstruction. Also, flickering between subsequent animation frames is substantially reduced due to extending bilateral filtering into temporal domain. %B Computer Graphics International (CGI 2004) %P 120 - 127 %I IEEE %@ 0-7695-2171-1
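The bilateral weighting described in the Weber et al. abstract combines a spatial Gaussian with a range Gaussian on the signal values, so smoothing stops at sharp lighting features such as caustic boundaries. A minimal 1D sketch (the paper operates on spatio-temporal photon density; the 1D signal and the sigma values here are illustrative):

```python
import math

def bilateral_filter_1d(values, sigma_s=2.0, sigma_r=0.5):
    """Minimal 1D bilateral filter: each sample is averaged with its
    neighbours, weighted both by spatial distance (sigma_s) and by
    difference in value (sigma_r), so sharp edges survive smoothing."""
    radius = int(3 * sigma_s)
    out = []
    for i, vi in enumerate(values):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(values), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))      # spatial term
                 * math.exp(-((vi - values[j]) ** 2) / (2 * sigma_r ** 2)))  # range term
            wsum += w
            vsum += w * values[j]
        out.append(vsum / wsum)
    return out
```

With a small `sigma_r`, samples on opposite sides of a sharp step contribute almost nothing to each other, so noise is smoothed within flat regions while the step itself stays crisp.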
2003
Damez, C., Dmitriev, K., and Myszkowski, K. 2003. State of the Art for Global Illumination in Interactive Applications and High-Quality Animations. Computer Graphics Forum 22.
Abstract
Global illumination algorithms are regarded as computationally intensive. This cost is a practical problem when producing animations or when interactions with complex models are required. Several algorithms have been proposed to address this issue. Roughly, two families of methods can be distinguished. The first one aims at providing interactive feedback for lighting design applications. The second one gives higher priority to the quality of results, and therefore relies on offline computations. Recently, impressive advances have been made in both categories. In this report, we present a survey and classification of the most up-to-date of these methods.
Export
BibTeX
@article{DDM2003, TITLE = {State of the Art for Global Illumination in Interactive Applications and High-Quality Animations}, AUTHOR = {Damez, Cyrille and Dmitriev, Kirill and Myszkowski, Karol}, LANGUAGE = {eng}, ISSN = {0167-7055}, LOCALID = {Local-ID: C125675300671F7B-884A78970185CB39C1256D030043DEC6-DDM2003}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Global illumination algorithms are regarded as computationally intensive. This cost is a practical problem when producing animations or when interactions with complex models are required. Several algorithms have been proposed to address this issue. Roughly, two families of methods can be distinguished. The first one aims at providing interactive feedback for lighting design applications. The second one gives higher priority to the quality of results, and therefore relies on offline computations. Recently, impressive advances have been made in both categories. In this report, we present a survey and classification of the most up-to-date of these methods.}, JOURNAL = {Computer Graphics Forum}, VOLUME = {22}, PAGES = {55--77}, }
Endnote
%0 Journal Article %A Damez, Cyrille %A Dmitriev, Kirill %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T State of the Art for Global Illumination in Interactive Applications and High-Quality Animations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2E2E-B %F EDOC: 202039 %F OTHER: Local-ID: C125675300671F7B-884A78970185CB39C1256D030043DEC6-DDM2003 %D 2003 %* Review method: peer-reviewed %X Global illumination algorithms are regarded as computationally intensive. This cost is a practical problem when producing animations or when interactions with complex models are required. Several algorithms have been proposed to address this issue. Roughly, two families of methods can be distinguished. The first one aims at providing interactive feedback for lighting design applications. The second one gives higher priority to the quality of results, and therefore relies on offline computations. Recently, impressive advances have been made in both categories. In this report, we present a survey and classification of the most up-to-date of these methods. %J Computer Graphics Forum %V 22 %& 55 %P 55 - 77 %@ false
Drago, F., Martens, W., Myszkowski, K., and Chiba, N. 2003a. Design of a Tone Mapping Operator for High Dynamic Range Images Based upon Psychophysical Evaluation and Preference Mapping. Human Vision and Electronic Imaging VIII (HVEI-03), SPIE.
Abstract
A tone mapping algorithm for displaying high contrast scenes was designed on the basis of the results of experimental tests using human subjects. Systematic perceptual evaluation of several existing tone mapping techniques revealed that the most "natural" appearance was determined by the presence in the output image of detailed scenery features, often made visible by limiting contrast and by properly reproducing brightness. Taking these results into account, we developed a system to produce images close to the ideal preference point for high dynamic range input image data. Of the algorithms that we tested, only the Retinex algorithm was capable of retrieving detailed scene features hidden in high luminance areas while still preserving a good contrast level. This paper presents changes made to the Retinex algorithm for processing high dynamic range images, and a further integration of the Retinex with specialized tone mapping algorithms that enables the production of images that appear as similar as possible to the viewer's perception of actual scenes.
Export
BibTeX
@inproceedings{Myszkowski2003, TITLE = {Design of a Tone Mapping Operator for High Dynamic Range Images Based upon Psychophysical Evaluation and Preference Mapping}, AUTHOR = {Drago, Frederic and Martens, William and Myszkowski, Karol and Chiba, Norishige}, EDITOR = {Rogowitz, Bernice and Pappas, Thrasyvoulos}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-AC62EE60325404E4C1256CE8006BA646-Myszkowski2003}, PUBLISHER = {SPIE}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {A tone mapping algorithm for displaying high contrast scenes was designed on the basis of the results of experimental tests using human subjects. Systematic perceptual evaluation of several existing tone mapping techniques revealed that the most ``natural'' appearance was determined by the presence in the output image of detailed scenery features often made visible by limiting contrast and by properly reproducing brightness. Taking these results into account, we developed a system to produce images close to the ideal preference point for high dynamic range input image data. Of the algorithms that we tested, only the Retinex algorithm was capable of retrieving detailed scene features hidden in high luminance areas while still preserving a good contrast level. This paper presents changes made to the Retinex algorithm for processing high dynamic range images, and a further integration of the Retinex with specialized tone mapping algorithms that enables the production of images that appear as similar as possible to the viewer's perception of actual scenes.}, BOOKTITLE = {Human Vision and Electronic Imaging VIII (HVEI-03)}, PAGES = {321--331}, SERIES = {SPIE proceedings}, ADDRESS = {Santa Clara, USA}, }
Endnote
%0 Conference Proceedings %A Drago, Frederic %A Martens, William %A Myszkowski, Karol %A Chiba, Norishige %E Rogowitz, Bernice %E Pappas, Thrasyvoulos %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Max Planck Society %T Design of a Tone Mapping Operator for High Dynamic Range Images Based upon Psychophysical Evaluation and Preference Mapping : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2CB4-B %F EDOC: 201868 %F OTHER: Local-ID: C125675300671F7B-AC62EE60325404E4C1256CE8006BA646-Myszkowski2003 %D 2003 %B HVEI 2003 %Z date of event: 2003-01-21 - 2003-01-23 %C Santa Clara, USA %X A tone mapping algorithm for displaying high contrast scenes was designed on the basis of the results of experimental tests using human subjects. Systematic perceptual evaluation of several existing tone mapping techniques revealed that the most "natural" appearance was determined by the presence in the output image of detailed scenery features often made visible by limiting contrast and by properly reproducing brightness. Taking these results into account, we developed a system to produce images close to the ideal preference point for high dynamic range input image data. Of the algorithms that we tested, only the Retinex algorithm was capable of retrieving detailed scene features hidden in high luminance areas while still preserving a good contrast level. This paper presents changes made to the Retinex algorithm for processing high dynamic range images, and a further integration of the Retinex with specialized tone mapping algorithms that enables the production of images that appear as similar as possible to the viewer's perception of actual scenes. %B Human Vision and Electronic Imaging VIII (HVEI-03) %P 321 - 331 %I SPIE %B SPIE proceedings
Drago, F., Myszkowski, K., Annen, T., and Chiba, N. 2003b. Adaptive Logarithmic Mapping For Displaying High Contrast Scenes. EUROGRAPHICS 2003 (EUROGRAPHICS-03) : the European Association for Computer Graphics, 24th Annual Conference, Blackwell.
Abstract
We propose a fast, high-quality tone mapping technique to display high contrast images on devices with a limited dynamic range of luminance values. The method is based on logarithmic compression of luminance values, imitating the human response to light. A bias power function is introduced to adaptively vary logarithmic bases, resulting in good preservation of details and contrast. To improve contrast in dark areas, changes to the gamma correction procedure are proposed. Our adaptive logarithmic mapping technique is capable of producing perceptually tuned images with high dynamic content and works at interactive speeds. We demonstrate a successful application of our technique in a high dynamic range video player, which makes it possible to adjust optimal viewing conditions for any kind of display while taking into account user preferences concerning brightness, contrast compression, and detail reproduction.
Export
BibTeX
@inproceedings{Drago2003b, TITLE = {Adaptive Logarithmic Mapping For Displaying High Contrast Scenes}, AUTHOR = {Drago, Frederic and Myszkowski, Karol and Annen, Thomas and Chiba, Norishige}, EDITOR = {Brunet, Pere and Fellner, Dieter W.}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-53A4B81D590A3EEAC1256CFD003CE441-Drago2003b}, PUBLISHER = {Blackwell}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We propose a fast, high quality tone mapping technique to display high contrast images on devices with limited dynamic range of luminance values. The method is based on logarithmic compression of luminance values, imitating the human response to light. A bias power function is introduced to adaptively vary logarithmic bases, resulting in good preservation of details and contrast. To improve contrast in dark areas, changes to the gamma correction procedure are proposed. Our adaptive logarithmic mapping technique is capable of producing perceptually tuned images with high dynamic content and works at interactive speeds. We demonstrate a successful application of our technique to a high dynamic range video player which enables to adjust optimal viewing conditions for any kind of display while taking into account the user preferences concerning brightness, contrast compression, and detail reproduction.}, BOOKTITLE = {EUROGRAPHICS 2003 (EUROGRAPHICS-03) : the European Association for Computer Graphics, 24th Annual Conference}, PAGES = {419--426}, SERIES = {Computer Graphics Forum}, ADDRESS = {Granada, Spain}, }
Endnote
%0 Conference Proceedings %A Drago, Frederic %A Myszkowski, Karol %A Annen, Thomas %A Chiba, Norishige %E Brunet, Pere %E Fellner, Dieter W. %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Adaptive Logarithmic Mapping For Displaying High Contrast Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2BEF-F %F EDOC: 201862 %F OTHER: Local-ID: C125675300671F7B-53A4B81D590A3EEAC1256CFD003CE441-Drago2003b %D 2003 %B EUROGRAPHICS 2003 %Z date of event: 2003-09-01 - 2003-09-05 %C Granada, Spain %X We propose a fast, high quality tone mapping technique to display high contrast images on devices with limited dynamic range of luminance values. The method is based on logarithmic compression of luminance values, imitating the human response to light. A bias power function is introduced to adaptively vary logarithmic bases, resulting in good preservation of details and contrast. To improve contrast in dark areas, changes to the gamma correction procedure are proposed. Our adaptive logarithmic mapping technique is capable of producing perceptually tuned images with high dynamic content and works at interactive speeds. We demonstrate a successful application of our technique to a high dynamic range video player which enables to adjust optimal viewing conditions for any kind of display while taking into account the user preferences concerning brightness, contrast compression, and detail reproduction. %B EUROGRAPHICS 2003 (EUROGRAPHICS-03) : the European Association for Computer Graphics, 24th Annual Conference %P 419 - 426 %I Blackwell %B Computer Graphics Forum
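The adaptive logarithmic mapping described above compresses luminance logarithmically while a bias power function varies the logarithm base across the luminance range. A compact sketch of the luminance compression (the maximum display luminance `ld_max = 100` cd/m² and bias `b = 0.85` are commonly used defaults; treat the constants as illustrative):

```python
import math

def drago_tonemap(lw, lw_max, ld_max=100.0, b=0.85):
    """Adaptive logarithmic mapping of world luminance lw to a display
    value in [0, 1]. The bias power function interpolates the logarithm
    base between 2 (dark pixels) and 10 (bright pixels); b = 0.85 is a
    commonly cited default (constants here are illustrative)."""
    bias = (lw / lw_max) ** (math.log(b) / math.log(0.5))
    return (ld_max * 0.01 / math.log10(lw_max + 1.0)
            * math.log(lw + 1.0) / math.log(2.0 + 8.0 * bias))
```

The mapping is monotonic in `lw` and reaches exactly 1.0 at `lw = lw_max`, so the brightest scene luminance maps to the display maximum.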
Havran, V., Damez, C., Myszkowski, K., and Seidel, H.-P. 2003. An Efficient Spatio-Temporal Architecture for Animation Rendering. Rendering Techniques 2003 : 14th Eurographics Workshop on Rendering, ACM.
Abstract
Producing high quality animations featuring rich object appearance and compelling lighting effects is very time consuming using traditional frame-by-frame rendering systems. In this paper we present a rendering architecture for computing multiple frames at once by exploiting the coherence between image samples in the temporal domain. For each sample representing a given point in the scene we update its view-dependent components for each frame and add its contribution to pixels identified through the compensation of camera and object motion. This leads naturally to a high quality motion blur and significantly reduces the cost of illumination computations. The required visibility information is provided using a custom ray tracing acceleration data structure for multiple frames simultaneously. We demonstrate that precise and costly global illumination techniques such as bidirectional path tracing become affordable in this rendering architecture.
Export
BibTeX
@inproceedings{Havran2003:EGSR, TITLE = {An Efficient Spatio-Temporal Architecture for Animation Rendering}, AUTHOR = {Havran, Vlastimil and Damez, Cyrille and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Christensen, Per and Cohen-Or, Daniel}, LANGUAGE = {eng}, ISBN = {1-58113-754-0}, LOCALID = {Local-ID: C125675300671F7B-375DE41ADBC27783C1256D2500414C13-Havran2003:EGSR}, PUBLISHER = {ACM}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Producing high quality animations featuring rich object appearance and compelling lighting effects is very time consuming using traditional frame-by-frame rendering systems. In this paper we present a rendering architecture for computing multiple frames at once by exploiting the coherence between image samples in the temporal domain. For each sample representing a given point in the scene we update its view-dependent components for each frame and add its contribution to pixels identified through the compensation of camera and object motion. This leads naturally to a high quality motion blur and significantly reduces the cost of illumination computations. The required visibility information is provided using a custom ray tracing acceleration data structure for multiple frames simultaneously. We demonstrate that precise and costly global illumination techniques such as bidirectional path tracing become affordable in this rendering architecture.}, BOOKTITLE = {Rendering Techniques 2003 : 14th Eurographics Workshop on Rendering}, PAGES = {106--117}, ADDRESS = {Leuven, Belgium}, }
Endnote
%0 Conference Proceedings %A Havran, Vlastimil %A Damez, Cyrille %A Myszkowski, Karol %A Seidel, Hans-Peter %E Christensen, Per %E Cohen-Or, Daniel %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T An Efficient Spatio-Temporal Architecture for Animation Rendering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2C20-6 %F EDOC: 201824 %F OTHER: Local-ID: C125675300671F7B-375DE41ADBC27783C1256D2500414C13-Havran2003:EGSR %D 2003 %B Rendering Techniques 2003 %Z date of event: 2003-06-25 - 2003-06-27 %C Leuven, Belgium %X Producing high quality animations featuring rich object appearance and compelling lighting effects is very time consuming using traditional frame-by-frame rendering systems. In this paper we present a rendering architecture for computing multiple frames at once by exploiting the coherence between image samples in the temporal domain. For each sample representing a given point in the scene we update its view-dependent components for each frame and add its contribution to pixels identified through the compensation of camera and object motion. This leads naturally to a high quality motion blur and significantly reduces the cost of illumination computations. The required visibility information is provided using a custom ray tracing acceleration data structure for multiple frames simultaneously. We demonstrate that precise and costly global illumination techniques such as bidirectional path tracing become affordable in this rendering architecture. %B Rendering Techniques 2003 : 14th Eurographics Workshop on Rendering %P 106 - 117 %I ACM %@ 1-58113-754-0
Mantiuk, R., Myszkowski, K., and Pattanaik, S. 2003. Attention Guided MPEG Compression for Computer Animations. Proceedings of the 19th Spring Conference on Computer Graphics 2003 (SCCG 03), ACM.
Export
BibTeX
@inproceedings{Mantiuk2003b, TITLE = {Attention Guided {MPEG} Compression for Computer Animations}, AUTHOR = {Mantiuk, Rafal and Myszkowski, Karol and Pattanaik, Sumant}, EDITOR = {Joy, Kenneth I.}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-19ABC3A0ED74A809C1256CFD003DB24F-Mantiuk2003b}, PUBLISHER = {ACM}, YEAR = {2003}, DATE = {2003}, BOOKTITLE = {Proceedings of the 19th Spring Conference on Computer Graphics 2003 (SCCG 03)}, PAGES = {262--267}, ADDRESS = {Budmerice, Slovakia}, }
Endnote
%0 Conference Proceedings %A Mantiuk, Rafal %A Myszkowski, Karol %A Pattanaik, Sumant %E Joy, Kenneth I. %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society %T Attention Guided MPEG Compression for Computer Animations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2C52-7 %F EDOC: 202006 %F OTHER: Local-ID: C125675300671F7B-19ABC3A0ED74A809C1256CFD003DB24F-Mantiuk2003b %D 2003 %B SCCG 2003 %Z date of event: 2003-04-24 - 2003-04-26 %C Budmerice, Slovakia %B Proceedings of the 19th Spring Conference on Computer Graphics 2003 (SCCG 03) %P 262 - 267 %I ACM
2002
Damez, C., Dmitriev, K., and Myszkowski, K. 2002. Global Illumination for Interactive Applications and High-Quality Animations. Eurographics 2002: State of the Art Reports, Eurographics.
Export
BibTeX
@inproceedings{Damez2002, TITLE = {Global Illumination for Interactive Applications and High-Quality Animations}, AUTHOR = {Damez, Cyrille and Dmitriev, Kirill and Myszkowski, Karol}, EDITOR = {Fellner, Dieter and Scopignio, Roberto}, LANGUAGE = {eng}, ISSN = {1017-4565}, LOCALID = {Local-ID: C125675300671F7B-96B7968CB1A20486C1256C3600327527-Damez2002}, PUBLISHER = {Eurographics}, YEAR = {2002}, DATE = {2002}, BOOKTITLE = {Eurographics 2002: State of the Art Reports}, PAGES = {1--24}, ADDRESS = {Saarbr{\"u}cken, Germany}, }
Endnote
%0 Conference Proceedings %A Damez, Cyrille %A Dmitriev, Kirill %A Myszkowski, Karol %E Fellner, Dieter %E Scopignio, Roberto %+ Computer Graphics, MPI for Informatics, Max Planck Society Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Global Illumination for Interactive Applications and High-Quality Animations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2F9F-3 %F EDOC: 202132 %F OTHER: Local-ID: C125675300671F7B-96B7968CB1A20486C1256C3600327527-Damez2002 %D 2002 %B EUROGRAPHICS 2002 STAR %Z date of event: 2002-09-02 - 2002-09-06 %C Saarbrücken, Germany %B Eurographics 2002: State of the Art Reports %P 1 - 24 %I Eurographics %@ false
Dmitriev, K., Brabec, S., Myszkowski, K., and Seidel, H.-P. 2002. Interactive Global Illumination Using Selective Photon Tracing. Proceedings of the 13th Eurographics Workshop on Rendering, Eurographics/ACM.
Export
BibTeX
@inproceedings{Dmitriev2002, TITLE = {Interactive Global Illumination Using Selective Photon Tracing}, AUTHOR = {Dmitriev, Kirill and Brabec, Stefan and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Debevec, Paul and Gibson, Simon}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-5D4014450BF5525BC1256C360028CE0D-Dmitriev2002}, PUBLISHER = {Eurographics/ACM}, YEAR = {2002}, DATE = {2002}, BOOKTITLE = {Proceedings of the 13th Eurographics Workshop on Rendering}, PAGES = {21--33}, ADDRESS = {Pisa, Italy}, }
Endnote
%0 Conference Proceedings %A Dmitriev, Kirill %A Brabec, Stefan %A Myszkowski, Karol %A Seidel, Hans-Peter %E Debevec, Paul %E Gibson, Simon %+ Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Interactive Global Illumination Using Selective Photon Tracing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2FBC-1 %F EDOC: 202130 %F OTHER: Local-ID: C125675300671F7B-5D4014450BF5525BC1256C360028CE0D-Dmitriev2002 %D 2002 %B Rendering Techniques 2002 %Z date of event: 2002-06-26 - 2002-07-28 %C Pisa, Italy %B Proceedings of the 13th Eurographics Workshop on Rendering %P 21 - 33 %I Eurographics/ACM
Drago, F., Martens, W., Myszkowski, K., and Seidel, H.-P. 2002. Perceptual evaluation of tone mapping operators with regard to similarity and preference. Max-Planck-Institut für Informatik, Saarbrücken.
Abstract
Seven tone mapping methods currently available to display high dynamic range images were submitted to perceptual evaluation in order to find the attributes most predictive of the success of a robust all-around tone mapping algorithm. The two most salient Stimulus Space dimensions underlying the perception of a set of images produced by six of the tone mappings were revealed using INdividual Differences SCALing (INDSCAL) analysis; and an ideal preference point within the INDSCAL-derived Stimulus Space was determined for a group of 11 observers using PREFerence MAPping (PREFMAP) analysis. Interpretation of the INDSCAL results was aided by pairwise comparisons of images that led to an ordering of the images according to which were more or less natural looking.
Export
BibTeX
@techreport{DragoMartensMyszkowskiSeidel2002, TITLE = {Perceptual evaluation of tone mapping operators with regard to similarity and preference}, AUTHOR = {Drago, Frederic and Martens, William and Myszkowski, Karol and Seidel, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2002-4-002}, NUMBER = {MPI-I-2002-4-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {Seven tone mapping methods currently available to display high dynamic range images were submitted to perceptual evaluation in order to find the attributes most predictive of the success of a robust all-around tone mapping algorithm. The two most salient Stimulus Space dimensions underlying the perception of a set of images produced by six of the tone mappings were revealed using INdividual Differences SCALing (INDSCAL) analysis; and an ideal preference point within the INDSCAL-derived Stimulus Space was determined for a group of 11 observers using PREFerence MAPping (PREFMAP) analysis. Interpretation of the INDSCAL results was aided by pairwise comparisons of images that led to an ordering of the images according to which were more or less natural looking.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Drago, Frederic %A Martens, William %A Myszkowski, Karol %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptual evaluation of tone mapping operators with regard to similarity and preference : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6C83-0 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2002-4-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2002 %P 30 p. %X Seven tone mapping methods currently available to display high dynamic range images were submitted to perceptual evaluation in order to find the attributes most predictive of the success of a robust all-around tone mapping algorithm. The two most salient Stimulus Space dimensions underlying the perception of a set of images produced by six of the tone mappings were revealed using INdividual Differences SCALing (INDSCAL) analysis; and an ideal preference point within the INDSCAL-derived Stimulus Space was determined for a group of 11 observers using PREFerence MAPping (PREFMAP) analysis. Interpretation of the INDSCAL results was aided by pairwise comparisons of images that led to an ordering of the images according to which were more or less natural looking. %B Research Report / Max-Planck-Institut für Informatik
Myszkowski, K. 2002. Perception-Based Global Illumination, Rendering, and Animation Techniques. Proceedings of the 18th Spring Conference on Computer Graphics (SCCG 2002), ACM Siggraph.
Export
BibTeX
@inproceedings{MyszkowskiSCCG2002, TITLE = {Perception-Based Global Illumination, Rendering, and Animation Techniques}, AUTHOR = {Myszkowski, Karol}, EDITOR = {Chalmers, Alan}, LANGUAGE = {eng}, ISBN = {1-58113-608-0}, LOCALID = {Local-ID: C125675300671F7B-E4ADD3B275CD72ECC1256C3600371EFE-MyszkowskiSCCG2002}, PUBLISHER = {ACM Siggraph}, YEAR = {2002}, DATE = {2002}, BOOKTITLE = {Proceedings of the 18th Spring Conference on Computer Graphics (SCCG 2002)}, PAGES = {13--24}, ADDRESS = {Budmerice, Slovakia}, }
Endnote
%0 Conference Proceedings %A Myszkowski, Karol %E Chalmers, Alan %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Perception-Based Global Illumination, Rendering, and Animation Techniques : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-3027-B %F EDOC: 202222 %F OTHER: Local-ID: C125675300671F7B-E4ADD3B275CD72ECC1256C3600371EFE-MyszkowskiSCCG2002 %D 2002 %B SCCG 2002 %Z date of event: 2002-04-24 - 2002-04-27 %C Budmerice, Slovakia %B Proceedings of the 18th Spring Conference on Computer Graphics (SCCG 2002) %P 13 - 24 %I ACM Siggraph %@ 1-58113-608-0
Myszkowski, K., Tawara, T., and Seidel, H.-P. 2002. Using Animation Quality Metric to Improve Efficiency of Global Illumination Computation for Dynamic Environments. Proceedings of 7th SPIE Conference Human Vision and Electronic Imaging, SPIE - The International Society for Optical Engineering.
Export
BibTeX
@inproceedings{MyszkowskiSpie2002, TITLE = {Using Animation Quality Metric to Improve Efficiency of Global Illumination Computation for Dynamic Environments}, AUTHOR = {Myszkowski, Karol and Tawara, Takehiro and Seidel, Hans-Peter}, EDITOR = {Rogowitz, Bernice and Pappas, Thrasyvoulos}, LANGUAGE = {eng}, ISBN = {0-8194-4402-2}, LOCALID = {Local-ID: C125675300671F7B-3C349C0FFBBA9B5FC1256C36002A89AD-MyszkowskiSpie2002}, PUBLISHER = {SPIE -- The International Society for Optical Engineering}, YEAR = {2002}, DATE = {2002}, BOOKTITLE = {Proceedings of 7th SPIE Conference Human Vision and Electronic Imaging}, PAGES = {187--196}, SERIES = {SPIE Proceedings Series}, ADDRESS = {San Jose, USA}, }
Endnote
%0 Conference Proceedings %A Myszkowski, Karol %A Tawara, Takehiro %A Seidel, Hans-Peter %E Rogowitz, Bernice %E Pappas, Thrasyvoulos %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Using Animation Quality Metric to Improve Efficiency of Global Illumination Computation for Dynamic Environments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-30B6-B %F EDOC: 202177 %F OTHER: Local-ID: C125675300671F7B-3C349C0FFBBA9B5FC1256C36002A89AD-MyszkowskiSpie2002 %D 2002 %B Human Vision and Electronic Imaging %Z date of event: 2002-01-21 - 2002-01-24 %C San Jose, USA %B Proceedings of 7th SPIE Conference Human Vision and Electronic Imaging %P 187 - 196 %I SPIE - The International Society for Optical Engineering %@ 0-8194-4402-2 %B SPIE Proceedings Series
Tawara, T., Myszkowski, K., and Seidel, H.-P. 2002. Localizing the Final Gathering for Dynamic Scenes using the Photon Map. Proceedings of Vision, Modeling, and Visualization VMV 2002, Akademische Verlagsgesellschaft Aka GmbH.
Abstract
Rendering of high quality animations with global illumination effects is very costly using traditional techniques designed for static scenes. In this paper we present an extension of the photon mapping algorithm to handle dynamic environments. First, for each animation segment the static irradiance cache is computed only once for the scene with all dynamic objects removed. Then, for each frame, the dynamic objects are inserted and the irradiance cache is updated locally in the scene regions whose lighting is strongly affected by the objects. In the remaining scene regions the photon map is used to correct the irradiance values in the static cache. As a result the overall animation rendering efficiency is significantly improved and the temporal aliasing is reduced.
Export
BibTeX
@inproceedings{Tawara2002, TITLE = {Localizing the Final Gathering for Dynamic Scenes using the Photon Map}, AUTHOR = {Tawara, Takehiro and Myszkowski, Karol and Seidel, Hans-Peter}, EDITOR = {Greiner, G{\"u}nther and Niemann, Heinrich and Ertl, Thomas and Girod, Bernd and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {1-58603-302-6}, LOCALID = {Local-ID: C125675300671F7B-26CF9AFACE9BDEF4C1256C80005DCB2A-Tawara2002}, PUBLISHER = {Akademische Verlagsgesellschaft Aka GmbH}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {Rendering of high quality animations with global illumination effects is very costly using traditional techniques designed for static scenes. In this paper we present an extension of the photon mapping algorithm to handle dynamic environments. First, for each animation segment the static irradiance cache is computed only once for the scene with all dynamic objects removed. Then, for each frame, the dynamic objects are inserted and the irradiance cache is updated locally in the scene regions whose lighting is strongly affected by the objects. In the remaining scene regions the photon map is used to correct the irradiance values in the static cache. As a result the overall animation rendering efficiency is significantly improved and the temporal aliasing is reduced.}, BOOKTITLE = {Proceedings of Vision, Modeling, and Visualization VMV 2002}, PAGES = {69--76}, ADDRESS = {Erlangen, Germany}, }
Endnote
%0 Conference Proceedings %A Tawara, Takehiro %A Myszkowski, Karol %A Seidel, Hans-Peter %E Greiner, Günther %E Niemann, Heinrich %E Ertl, Thomas %E Girod, Bernd %E Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Localizing the Final Gathering for Dynamic Scenes using the Photon Map : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2FD2-D %F EDOC: 202197 %F OTHER: Local-ID: C125675300671F7B-26CF9AFACE9BDEF4C1256C80005DCB2A-Tawara2002 %D 2002 %B VMV 2002 %Z date of event: 2002-11-20 - 2002-11-22 %C Erlangen, Germany %X Rendering of high quality animations with global illumination effects is very costly using traditional techniques designed for static scenes. In this paper we present an extension of the photon mapping algorithm to handle dynamic environments. First, for each animation segment the static irradiance cache is computed only once for the scene with all dynamic objects removed. Then, for each frame, the dynamic objects are inserted and the irradiance cache is updated locally in the scene regions whose lighting is strongly affected by the objects. In the remaining scene regions the photon map is used to correct the irradiance values in the static cache. As a result the overall animation rendering efficiency is significantly improved and the temporal aliasing is reduced. %B Proceedings of Vision, Modeling, and Visualization VMV 2002 %P 69 - 76 %I Akademische Verlagsgesellschaft Aka GmbH %@ 1-58603-302-6
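The abstract above describes a three-step per-frame update scheme: a static irradiance cache built once per animation segment with dynamic objects removed, local recomputation only in regions whose lighting the dynamic objects strongly affect, and a photon-map correction of the static values elsewhere. The toy Python sketch below illustrates that control flow only; all names, data structures, and numeric values are hypothetical stand-ins, not the authors' code.

```python
# Hypothetical sketch of the update scheme from the abstract: recompute the
# irradiance cache only near dynamic objects, photon-map-correct the rest.

def build_static_cache(regions):
    """Irradiance cache for the scene with all dynamic objects removed."""
    return {r: 1.0 for r in regions}  # toy irradiance value per region

def affected(region, dynamic_objects, radius=1):
    """A region counts as 'affected' if a dynamic object lies within radius."""
    return any(abs(region - obj) <= radius for obj in dynamic_objects)

def recompute_irradiance(region, dynamic_objects):
    """Stand-in for a full final-gathering recomputation."""
    return 0.5

def render_frame(static_cache, dynamic_objects, photon_correction=0.1):
    cache = dict(static_cache)
    for region in cache:
        if affected(region, dynamic_objects):
            # lighting strongly affected: recompute the cache entry locally
            cache[region] = recompute_irradiance(region, dynamic_objects)
        else:
            # elsewhere: correct the static value from the photon map
            cache[region] += photon_correction
    return cache

regions = range(10)
static = build_static_cache(regions)       # once per animation segment
frame = render_frame(static, dynamic_objects=[3])  # per frame
```

The payoff the abstract claims comes from the asymmetry: the expensive branch runs only for the few affected regions, while the cheap correction handles the rest of the scene.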
2001
Daubert, K., Lensch, H.P.A., Heidrich, W., and Seidel, H.-P. 2001. Efficient Cloth Modeling and Rendering. Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering, Springer.
Abstract
Realistic modeling and high-performance rendering of cloth and clothing is a challenging problem. Often these materials are seen at distances where individual stitches and knits can be made out and need to be accounted for. Modeling of the geometry at this level of detail fails due to sheer complexity, while simple texture mapping techniques do not produce the desired quality. In this paper, we describe an efficient and realistic approach that takes into account view-dependent effects such as small displacements causing occlusion and shadows, as well as illumination effects. The method is efficient in terms of memory consumption, and uses a combination of hardware and software rendering to achieve high performance. It is conceivable that future graphics hardware will be flexible enough for full hardware rendering of the proposed method.
Export
BibTeX
@inproceedings{Daubert2001, TITLE = {Efficient Cloth Modeling and Rendering}, AUTHOR = {Daubert, Katja and Lensch, Hendrik P. A. and Heidrich, Wolfgang and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {3-211-83709-4}, LOCALID = {Local-ID: C125675300671F7B-FBC662E15414073CC1256A7D00509B96-Daubert2001}, PUBLISHER = {Springer}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {Realistic modeling and high-performance rendering of cloth and clothing is a challenging problem. Often these materials are seen at distances where individual stitches and knits can be made out and need to be accounted for. Modeling of the geometry at this level of detail fails due to sheer complexity, while simple texture mapping techniques do not produce the desired quality. In this paper, we describe an efficient and realistic approach that takes into account view-dependent effects such as small displacements causing occlusion and shadows, as well as illumination effects. The method is efficient in terms of memory consumption, and uses a combination of hardware and software rendering to achieve high performance. It is conceivable that future graphics hardware will be flexible enough for full hardware rendering of the proposed method.}, BOOKTITLE = {Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering}, EDITOR = {Gortler, Steven and Myszkowski, Karol}, PAGES = {63--70}, }
Endnote
%0 Conference Proceedings %A Daubert, Katja %A Lensch, Hendrik P. A. %A Heidrich, Wolfgang %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient Cloth Modeling and Rendering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-327C-0 %F EDOC: 520206 %F OTHER: Local-ID: C125675300671F7B-FBC662E15414073CC1256A7D00509B96-Daubert2001 %I Springer %D 2001 %B Untitled Event %Z date of event: 2001-06-25 - 2001-06-27 %C London, Great Britain %X Realistic modeling and high-performance rendering of cloth and clothing is a challenging problem. Often these materials are seen at distances where individual stitches and knits can be made out and need to be accounted for. Modeling of the geometry at this level of detail fails due to sheer complexity, while simple texture mapping techniques do not produce the desired quality. In this paper, we describe an efficient and realistic approach that takes into account view-dependent effects such as small displacements causing occlusion and shadows, as well as illumination effects. The method is efficient in terms of memory consumption, and uses a combination of hardware and software rendering to achieve high performance. It is conceivable that future graphics hardware will be flexible enough for full hardware rendering of the proposed method. %B Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering %E Gortler, Steven; Myszkowski, Karol %P 63 - 70 %I Springer %@ 3-211-83709-4
Drago, F. and Myszkowski, K. 2001. Validation Proposal for Global Illumination and Rendering Techniques. Computers & Graphics 25, 3.
Abstract
The goal of this study is to develop a complete set of data characterizing geometry, luminaires, and surfaces of a non-trivial existing environment for testing global illumination and rendering techniques. This paper briefly discusses the process of data acquisition. Also, the results of experiments on evaluating lighting simulation accuracy, and rendering fidelity for a Density Estimation Particle Tracing algorithm are presented. The importance of using the BRDF of surfaces in place of the more commonly used specular and diffuse reflectance coefficients is investigated for the test scene. The results obtained are contrasted with an "artistic approach" in which a skilled artist manually sets all reflectance characteristics to obtain a visually pleasant appearance that corresponds to the existing environment.
Export
BibTeX
@article{Myszkowski2001a, TITLE = {Validation Proposal for Global Illumination and Rendering Techniques}, AUTHOR = {Drago, Frederic and Myszkowski, Karol}, LANGUAGE = {eng}, ISSN = {0097-8493}, LOCALID = {Local-ID: C125675300671F7B-BD9D44D4C62EF5B8C1256A7D004D7794-Myszkowski2001a}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {The goal of this study is to develop a complete set of data characterizing geometry, luminaires, and surfaces of a non-trivial existing environment for testing global illumination and rendering techniques. This paper briefly discusses the process of data acquisition. Also, the results of experiments on evaluating lighting simulation accuracy, and rendering fidelity for a Density Estimation Particle Tracing algorithm are presented. The importance of using the BRDF of surfaces in place of the more commonly used specular and diffuse reflectance coefficients is investigated for the test scene. The results obtained are contrasted with an ``artistic approach'' in which a skilled artist manually sets all reflectance characteristics to obtain a visually pleasant appearance that corresponds to the existing environment.}, JOURNAL = {Computers \& Graphics}, VOLUME = {25}, NUMBER = {3}, PAGES = {511--518}, }
Endnote
%0 Journal Article %A Drago, Frederic %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Validation Proposal for Global Illumination and Rendering Techniques : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-32D6-2 %F EDOC: 520202 %F OTHER: Local-ID: C125675300671F7B-BD9D44D4C62EF5B8C1256A7D004D7794-Myszkowski2001a %D 2001 %* Review method: peer-reviewed %X The goal of this study is to develop a complete set of data characterizing geometry, luminaires, and surfaces of a non-trivial existing environment for testing global illumination and rendering techniques. This paper briefly discusses the process of data acquisition. Also, the results of experiments on evaluating lighting simulation accuracy, and rendering fidelity for a Density Estimation Particle Tracing algorithm are presented. The importance of using the BRDF of surfaces in place of the more commonly used specular and diffuse reflectance coefficients is investigated for the test scene. The results obtained are contrasted with an "artistic approach" in which a skilled artist manually sets all reflectance characteristics to obtain a visually pleasant appearance that corresponds to the existing environment. %J Computers & Graphics %V 25 %N 3 %& 511 %P 511 - 518 %@ false
Ershov, S., Kolchin, K., and Myszkowski, K. 2001. Rendering Pearlescent Appearance Based on Paint-Composition Modeling. The European Association for Computer Graphics 22nd Annual Conference: EUROGRAPHICS 2001, Blackwell.
Export
BibTeX
@inproceedings{Myszkowski2000c, TITLE = {Rendering Pearlescent Appearance Based on Paint-Composition Modeling}, AUTHOR = {Ershov, Sergey and Kolchin, Konstantin and Myszkowski, Karol}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-9BFB2B24DB8E17BBC1256A7D004F1487-Myszkowski2000c}, PUBLISHER = {Blackwell}, YEAR = {2001}, DATE = {2001}, BOOKTITLE = {The European Association for Computer Graphics 22nd Annual Conference: EUROGRAPHICS 2001}, EDITOR = {Chalmers, Alan and Rhyne, Theresa-Marie}, PAGES = {227--238}, SERIES = {Computer Graphics Forum}, }
Endnote
%0 Conference Proceedings %A Ershov, Sergey %A Kolchin, Konstantin %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Rendering Pearlescent Appearance Based on Paint-Composition Modeling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-32C8-2 %F EDOC: 520204 %F OTHER: Local-ID: C125675300671F7B-9BFB2B24DB8E17BBC1256A7D004F1487-Myszkowski2000c %I Blackwell %D 2001 %B Untitled Event %Z date of event: 2001 %C Manchester, UK %B The European Association for Computer Graphics 22nd Annual Conference: EUROGRAPHICS 2001 %E Chalmers, Alan; Rhyne, Theresa-Marie %P 227 - 238 %I Blackwell %B Computer Graphics Forum
Gortler, S. and Myszkowski, K., eds. 2001. Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering. Springer.
Export
BibTeX
@proceedings{Myszkowski2000egwr, TITLE = {Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering}, EDITOR = {Gortler, Steven and Myszkowski, Karol}, LANGUAGE = {eng}, ISBN = {3-211-83709-4}, LOCALID = {Local-ID: C125675300671F7B-E3F9C0E3D792F582C1256A7D004CBE1F-Myszkowski2000egwr}, PUBLISHER = {Springer}, YEAR = {2001}, DATE = {2001}, PAGES = {1--347}, }
Endnote
%0 Conference Proceedings %E Gortler, Steven %E Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-32CA-D %F EDOC: 520201 %@ 3-211-83709-4 %F OTHER: Local-ID: C125675300671F7B-E3F9C0E3D792F582C1256A7D004CBE1F-Myszkowski2000egwr %I Springer %D 2001 %B Untitled Event %Z date of event: 2001 %C University College London %P 1-347
Haber, J., Myszkowski, K., Yamauchi, H., and Seidel, H.-P. 2001. Perceptually Guided Corrective Splatting. The European Association for Computer Graphics 22nd Annual Conference: EUROGRAPHICS 2001, Blackwell.
Abstract
One of the basic difficulties with interactive walkthroughs is the high quality rendering of object surfaces with non-diffuse light scattering characteristics. Since full ray tracing at interactive rates is usually impossible, we render a precomputed global illumination solution using graphics hardware and use remaining computational power to correct the appearance of non-diffuse objects on-the-fly. The question arises, how to obtain the best image quality as perceived by a human observer within a limited amount of time for each frame. We address this problem by enforcing corrective computation for those non-diffuse objects that are selected using a computational model of visual attention. We consider both the saliency- and task-driven selection of those objects and benefit from the fact that shading artifacts of "unattended" objects are likely to remain unnoticed. We use a hierarchical image-space sampling scheme to control ray tracing and splat the generated point samples. The resulting image converges progressively to a ray traced solution if the viewing parameters remain unchanged. Moreover, we use a sample cache to enhance visual appearance if the time budget for correction has been too low for some frame. We check the validity of the cached samples using a novel criterion suited for non-diffuse surfaces and reproject valid samples into the current view.
Export
BibTeX
@inproceedings{Haber:2001:PGCS, TITLE = {Perceptually Guided Corrective Splatting}, AUTHOR = {Haber, J{\"o}rg and Myszkowski, Karol and Yamauchi, Hitoshi and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISSN = {0167-7055}, LOCALID = {Local-ID: C125675300671F7B-3992DB8541113439C1256A72003B9C5A-Haber:2001:PGCS}, PUBLISHER = {Blackwell}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {One of the basic difficulties with interactive walkthroughs is the high quality rendering of object surfaces with non-diffuse light scattering characteristics. Since full ray tracing at interactive rates is usually impossible, we render a precomputed global illumination solution using graphics hardware and use remaining computational power to correct the appearance of non-diffuse objects on-the-fly. The question arises, how to obtain the best image quality as perceived by a human observer within a limited amount of time for each frame. We address this problem by enforcing corrective computation for those non-diffuse objects that are selected using a computational model of visual attention. We consider both the saliency- and task-driven selection of those objects and benefit from the fact that shading artifacts of ``unattended'' objects are likely to remain unnoticed. We use a hierarchical image-space sampling scheme to control ray tracing and splat the generated point samples. The resulting image converges progressively to a ray traced solution if the viewing parameters remain unchanged. Moreover, we use a sample cache to enhance visual appearance if the time budget for correction has been too low for some frame. We check the validity of the cached samples using a novel criterion suited for non-diffuse surfaces and reproject valid samples into the current view.}, BOOKTITLE = {The European Association for Computer Graphics 22nd Annual Conference: EUROGRAPHICS 2001}, EDITOR = {Chalmers, Alan and Rhyne, Theresa-Marie}, PAGES = {C142--C152}, SERIES = {Computer Graphics Forum}, }
Endnote
%0 Conference Proceedings %A Haber, Jörg %A Myszkowski, Karol %A Yamauchi, Hitoshi %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptually Guided Corrective Splatting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-32C0-1 %F EDOC: 520198 %F OTHER: Local-ID: C125675300671F7B-3992DB8541113439C1256A72003B9C5A-Haber:2001:PGCS %I Blackwell %D 2001 %B Untitled Event %Z date of event: 2001 %C Manchester, UK %X One of the basic difficulties with interactive walkthroughs is the high quality rendering of object surfaces with non-diffuse light scattering characteristics. Since full ray tracing at interactive rates is usually impossible, we render a precomputed global illumination solution using graphics hardware and use remaining computational power to correct the appearance of non-diffuse objects on-the-fly. The question arises, how to obtain the best image quality as perceived by a human observer within a limited amount of time for each frame. We address this problem by enforcing corrective computation for those non-diffuse objects that are selected using a computational model of visual attention. We consider both the saliency- and task-driven selection of those objects and benefit from the fact that shading artifacts of "unattended" objects are likely to remain unnoticed. We use a hierarchical image-space sampling scheme to control ray tracing and splat the generated point samples. The resulting image converges progressively to a ray traced solution if the viewing parameters remain unchanged. Moreover, we use a sample cache to enhance visual appearance if the time budget for correction has been too low for some frame. 
We check the validity of the cached samples using a novel criterion suited for non-diffuse surfaces and reproject valid samples into the current view. %B The European Association for Computer Graphics 22nd Annual Conference: EUROGRAPHICS 2001 %E Chalmers, Alan; Rhyne, Theresa-Marie %P C142 - C152 %I Blackwell %B Computer Graphics Forum %@ false
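The Haber et al. abstract above hinges on a budgeting idea: spend the limited per-frame correction time on the non-diffuse objects a visual-attention model ranks as most salient, since artifacts on unattended objects tend to go unnoticed. A minimal Python sketch of such a greedy budget allocator follows; the function, object tuples, and numbers are hypothetical illustrations, not the paper's actual sampling scheme.

```python
# Toy saliency-budgeted selection: greedily correct the most salient
# non-diffuse objects until the per-frame ray-tracing budget is exhausted.
# (Hypothetical sketch; the paper uses a hierarchical image-space sampler.)

def select_for_correction(objects, budget):
    """objects: list of (name, saliency, cost) tuples.
    Returns the names corrected this frame, chosen in order of
    decreasing saliency while their cost still fits the budget."""
    chosen = []
    for name, saliency, cost in sorted(objects, key=lambda o: -o[1]):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen

# Example: with a budget of 8, only the most salient object fits.
corrected = select_for_correction(
    [("mirror", 0.9, 5), ("glass", 0.8, 4), ("vase", 0.3, 4)], budget=8)
```

Objects skipped in one frame can simply be retried in the next, which matches the abstract's progressive-convergence behavior when the viewpoint is static.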
Lensch, H.P.A., Kautz, J., Goesele, M., Heidrich, W., and Seidel, H.-P. 2001. Image-Based Reconstruction of Spatially Varying Materials. Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering, Springer.
Abstract
The measurement of accurate material properties is an important step towards photorealistic rendering. Many real-world objects are composed of a number of materials that often show subtle changes even within a single material. Thus, for photorealistic rendering both the general surface properties as well as the spatially varying effects of the object are needed. We present an image-based measuring method that robustly detects the different materials of real objects and fits an average bidirectional reflectance distribution function (BRDF) to each of them. In order to model the local changes as well, we project the measured data for each surface point into a basis formed by the recovered BRDFs leading to a truly spatially varying BRDF representation. A high quality model of a real object can be generated with relatively few input data. The generated model allows for rendering under arbitrary viewing and lighting conditions and realistically reproduces the appearance of the original object.
Export
BibTeX
@inproceedings{Lensch:2001:IRS, TITLE = {Image-Based Reconstruction of Spatially Varying Materials}, AUTHOR = {Lensch, Hendrik P. A. and Kautz, Jan and Goesele, Michael and Heidrich, Wolfgang and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {3-211-83709-4}, LOCALID = {Local-ID: C125675300671F7B-249EA7C6EDD9BBF4C1256A7D0052B695-Lensch:2001:IRS}, PUBLISHER = {Springer}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {The measurement of accurate material properties is an important step towards photorealistic rendering. Many real-world objects are composed of a number of materials that often show subtle changes even within a single material. Thus, for photorealistic rendering both the general surface properties as well as the spatially varying effects of the object are needed. We present an image-based measuring method that robustly detects the different materials of real objects and fits an average bidirectional reflectance distribution function (BRDF) to each of them. In order to model the local changes as well, we project the measured data for each surface point into a basis formed by the recovered BRDFs leading to a truly spatially varying BRDF representation. A high quality model of a real object can be generated with relatively few input data. The generated model allows for rendering under arbitrary viewing and lighting conditions and realistically reproduces the appearance of the original object.}, BOOKTITLE = {Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering}, EDITOR = {Gortler, Steven and Myszkowski, Karol}, PAGES = {104--115}, }
Endnote
%0 Conference Proceedings %A Lensch, Hendrik P. A. %A Kautz, Jan %A Goesele, Michael %A Heidrich, Wolfgang %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Image-Based Reconstruction of Spatially Varying Materials : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-32A1-7 %F EDOC: 520207 %F OTHER: Local-ID: C125675300671F7B-249EA7C6EDD9BBF4C1256A7D0052B695-Lensch:2001:IRS %I Springer %D 2001 %B Untitled Event %Z date of event: 2001-06-25 - 2001-06-27 %C London, Great Britain %X The measurement of accurate material properties is an important step towards photorealistic rendering. Many real-world objects are composed of a number of materials that often show subtle changes even within a single material. Thus, for photorealistic rendering both the general surface properties as well as the spatially varying effects of the object are needed. We present an image-based measuring method that robustly detects the different materials of real objects and fits an average bidirectional reflectance distribution function (BRDF) to each of them. In order to model the local changes as well, we project the measured data for each surface point into a basis formed by the recovered BRDFs leading to a truly spatially varying BRDF representation. A high quality model of a real object can be generated with relatively few input data. The generated model allows for rendering under arbitrary viewing and lighting conditions and realistically reproduces the appearance of the original object. %B Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering %E Gortler, Steven; Myszkowski, Karol %P 104 - 115 %I Springer %@ 3-211-83709-4
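The Lensch et al. abstract above represents a spatially varying BRDF by projecting each surface point's measured samples into a basis formed by a few recovered per-material BRDFs. The core of that projection is a small least-squares fit, sketched below in plain Python; the function name, the toy sample vectors, and the normal-equations solver are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: express a point's measured reflectance samples as a
# weighted mixture of k recovered basis BRDFs by least squares, solving the
# k x k normal equations with a tiny Gaussian elimination (no pivoting).

def project_onto_basis(samples, basis):
    """Return weights w minimizing ||samples - sum_i w_i * basis_i||,
    where `basis` is a list of k sample vectors of the same length."""
    k, n = len(basis), len(samples)
    # Gram matrix G[i][j] = <basis_i, basis_j> and right-hand side <basis_i, samples>
    G = [[sum(bi[t] * bj[t] for t in range(n)) for bj in basis] for bi in basis]
    rhs = [sum(b[t] * samples[t] for t in range(n)) for b in basis]
    # forward elimination
    for col in range(k):
        for row in range(col + 1, k):
            f = G[row][col] / G[col][col]
            for c in range(col, k):
                G[row][c] -= f * G[col][c]
            rhs[row] -= f * rhs[col]
    # back substitution
    w = [0.0] * k
    for row in reversed(range(k)):
        w[row] = (rhs[row] - sum(G[row][c] * w[c] for c in range(row + 1, k))) / G[row][row]
    return w

# Example: with an orthonormal toy basis the weights are just the samples.
weights = project_onto_basis([0.3, 0.7], [[1.0, 0.0], [0.0, 1.0]])
```

Storing only the per-point weight vector plus the few shared basis BRDFs is what makes the representation compact while still capturing local material variation.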
Myszkowski, K. 2001a. Chapter 6: Applications in Rendering and Animation. In: ACM Siggraph 2001, Course Notes: Seeing is Believing: Reality Perception in Modeling, Rendering and Animation. ACM Siggraph, New York, USA.
Export
BibTeX
@incollection{Myszkowski2001e, TITLE = {Chapter 6: Applications in Rendering and Animation}, AUTHOR = {Myszkowski, Karol}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-0AB78270DB481004C1256A7D00502668-Myszkowski2001e}, PUBLISHER = {ACM Siggraph}, ADDRESS = {New York, USA}, YEAR = {2001}, DATE = {2001}, BOOKTITLE = {ACM Siggraph 2001, Course Notes: Seeing is Believing: Reality Perception in Modeling, Rendering and Animation}, EDITOR = {McNamara, Ann and Chalmers, Alan}, PAGES = {1--52}, SERIES = {ACM Siggraph 2001, Course Notes}, VOLUME = {21}, }
Endnote
%0 Book Section %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Chapter 6: Applications in Rendering and Animation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-3271-6 %F EDOC: 520205 %F OTHER: Local-ID: C125675300671F7B-0AB78270DB481004C1256A7D00502668-Myszkowski2001e %I ACM Siggraph %C New York, USA %D 2001 %B ACM Siggraph 2001, Course Notes: Seeing is Believing: Reality Perception in Modeling, Rendering and Animation %E McNamara, Ann; Chalmers, Alan %P 1 - 52 %I ACM Siggraph %C New York, USA %S ACM Siggraph 2001, Course Notes %N 21
Myszkowski, K., Tawara, T., Akamine, H., and Seidel, H.-P. 2001. Perception-Guided Global Illumination Solution for Animation Rendering. Computer Graphics (SIGGRAPH-2001): Conference Proceedings, ACM.
Export
BibTeX
@inproceedings{Myszkowski2001b, TITLE = {Perception-Guided Global Illumination Solution for Animation Rendering}, AUTHOR = {Myszkowski, Karol and Tawara, Takehiro and Akamine, Hiroyuki and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {1-58113-292-1}, URL = {http://www.mpi-sb.mpg.de/resources/aqm/dynenv/paper/sg2001-myszkowski.pdf}, LOCALID = {Local-ID: C125675300671F7B-1D7B8F7EA05FC2F8C1256A7D004EABD7-Myszkowski2001b}, PUBLISHER = {ACM}, YEAR = {2001}, DATE = {2001}, BOOKTITLE = {Computer Graphics (SIGGRAPH-2001): Conference Proceedings}, EDITOR = {Fiume, Eugene}, PAGES = {221--230}, SERIES = {Annual Conference Series}, }
Endnote
%0 Conference Proceedings %A Myszkowski, Karol %A Tawara, Takehiro %A Akamine, Hiroyuki %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perception-Guided Global Illumination Solution for Animation Rendering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-32BD-B %F EDOC: 520203 %U http://www.mpi-sb.mpg.de/resources/aqm/dynenv/paper/sg2001-myszkowski.pdf %F OTHER: Local-ID: C125675300671F7B-1D7B8F7EA05FC2F8C1256A7D004EABD7-Myszkowski2001b %I ACM %D 2001 %B ACM SIGGRAPH 2001 %Z date of event: 2001-08-12 - 2001-08-17 %C Los Angeles, USA %B Computer Graphics (SIGGRAPH-2001): Conference Proceedings %E Fiume, Eugene %P 221 - 230 %I ACM %@ 1-58113-292-1 %B Annual Conference Series
Myszkowski, K. 2001b. Applications of the Visual Differences Predictor in Global Illumination Computation. The Journal of Three Dimensional Images 15, 4.
Abstract
We investigate applications of the Visible Difference Predictor (VDP) to steer global illumination computation. We use the VDP to monitor the progression of computation as a function of time for major global illumination algorithms. Based on the results obtained, we propose a novel global illumination algorithm which is a hybrid of stochastic (density estimation) and deterministic (adaptive mesh refinement) techniques used in an optimized sequence to reduce the differences between the intermediate and final images as predicted by the VDP. Also, the VDP is applied to decide upon stopping conditions for global illumination simulation, when further continuation of computation does not contribute to perceivable changes in the quality of the resulting images.
Export
BibTeX
@article{Myszkowski2001Aizu, TITLE = {Applications of the Visual Differences Predictor in Global Illumination Computation}, AUTHOR = {Myszkowski, Karol}, LANGUAGE = {eng}, ISSN = {1342-2189}, LOCALID = {Local-ID: C125675300671F7B-24AAEBE8788AB04EC1256B3B00604EF9-Myszkowski2001Aizu}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {We investigate applications of the Visible Difference Predictor (VDP) to steer global illumination computation. We use the VDP to monitor the progression of computation as a function of time for major global illumination algorithms. Based on the results obtained, we propose a novel global illumination algorithm which is a hybrid of stochastic (density estimation) and deterministic (adaptive mesh refinement) techniques used in an optimized sequence to reduce the differences between the intermediate and final images as predicted by the VDP. Also, the VDP is applied to decide upon stopping conditions for global illumination simulation, when further continuation of computation does not contribute to perceivable changes in the quality of the resulting images.}, JOURNAL = {The Journal of Three Dimensional Images}, VOLUME = {15}, NUMBER = {4}, PAGES = {57--64}, }
Endnote
%0 Journal Article %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Applications of the Visual Differences Predictor in Global Illumination Computation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-3265-2 %F EDOC: 520242 %F OTHER: Local-ID: C125675300671F7B-24AAEBE8788AB04EC1256B3B00604EF9-Myszkowski2001Aizu %D 2001 %* Review method: peer-reviewed %X We investigate applications of the Visible Difference Predictor (VDP) to steer global illumination computation. We use the VDP to monitor the progression of computation as a function of time for major global illumination algorithms. Based on the results obtained, we propose a novel global illumination algorithm which is a hybrid of stochastic (density estimation) and deterministic (adaptive mesh refinement) techniques used in an optimized sequence to reduce the differences between the intermediate and final images as predicted by the VDP. Also, the VDP is applied to decide upon stopping conditions for global illumination simulation, when further continuation of computation does not contribute to perceivable changes in the quality of the resulting images. %J The Journal of Three Dimensional Images %V 15 %N 4 %& 57 %P 57 - 64 %@ false
Myszkowski, K. 2001c. Efficient and Predictive Realistic Image Synthesis. Habilitation thesis, Warsaw Institute of Technology, Warsaw, Poland.
Abstract
Synthesis of realistic images which predict the appearance of the real world has many applications including architecture and interior design, illumination engineering, environmental assessment, special effects and film production, along with many others. Due to costly global illumination computation, which is required for the prediction of appearance, physically-based rendering still remains the domain of research laboratories, and is rarely used in industrial practice. The main goal of this work is to analyze problems and provide solutions towards making predictive rendering an efficient and practical tool. First, existing global illumination techniques are discussed, then efficient solutions which handle complex geometry, multiple light sources, and arbitrary light scattering characteristics are proposed. Since real-time lighting computation is not affordable for complex environments, techniques of lighting storage and real-time reconstruction using pre-calculated results are developed. Special attention is paid to the solutions which use perception-guided algorithms to improve their performance. This makes it possible to focus the computation on readily visible scene details, and to stop it when further improvement of the image quality cannot be perceived by the human observer. Also, by better use of perception-motivated physically-based partial solutions, meaningful images can be presented to the user at the early stages of computation. Since many algorithms make simplifying assumptions about the underlying physical model in order to achieve gains in rendering performance, a validation procedure for testing lighting simulation accuracy and image quality is proposed. To check the requirement of appearance predictability imposed on the developed algorithms, the rendered images are compared against the corresponding real-world views.
Export
BibTeX
@phdthesis{Myszkowski2001hab, TITLE = {Efficient and Predictive Realistic Image Synthesis}, AUTHOR = {Myszkowski, Karol}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-D5B808201D3BF620C1256A7D004363DC-Myszkowski2001hab}, SCHOOL = {Warsaw Institute of Technology}, ADDRESS = {Warsaw, Poland}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {Synthesis of realistic images which predict the appearance of the real world has many applications including architecture and interior design, illumination engineering, environmental assessment, special effects and film production, along with many others. Due to costly global illumination computation, which is required for the prediction of appearance, physically-based rendering still remains the domain of research laboratories, and is rarely used in industrial practice. The main goal of this work is to analyze problems and provide solutions towards making predictive rendering an efficient and practical tool. First, existing global illumination techniques are discussed, then efficient solutions which handle complex geometry, multiple light sources, and arbitrary light scattering characteristics are proposed. Since real-time lighting computation is not affordable for complex environments, techniques of lighting storage and real-time reconstruction using pre-calculated results are developed. Special attention is paid to the solutions which use perception-guided algorithms to improve their performance. This makes it possible to focus the computation on readily visible scene details, and to stop it when further improvement of the image quality cannot be perceived by the human observer. Also, by better use of perception-motivated physically-based partial solutions, meaningful images can be presented to the user at the early stages of computation. 
Since many algorithms make simplifying assumptions about the underlying physical model in order to achieve gains in rendering performance, a validation procedure for testing lighting simulation accuracy and image quality is proposed. To check the requirement of appearance predictability imposed on the developed algorithms, the rendered images are compared against the corresponding real-world views.}, TYPE = {Habilitation thesis}, }
Endnote
%0 Thesis %A Myszkowski, Karol %+ External Organizations %T Efficient and Predictive Realistic Image Synthesis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1482-7 %F EDOC: 520200 %F OTHER: Local-ID: C125675300671F7B-D5B808201D3BF620C1256A7D004363DC-Myszkowski2001hab %I Warsaw Institute of Technology %C Warsaw, Poland %D 2001 %V habilitation %9 habilitation %X Synthesis of realistic images which predict the appearance of the real world has many applications including architecture and interior design, illumination engineering, environmental assessment, special effects and film production, along with many others. Due to costly global illumination computation, which is required for the prediction of appearance, physically-based rendering still remains the domain of research laboratories, and is rarely used in industrial practice. The main goal of this work is to analyze problems and provide solutions towards making predictive rendering an efficient and practical tool. First, existing global illumination techniques are discussed, then efficient solutions which handle complex geometry, multiple light sources, and arbitrary light scattering characteristics are proposed. Since real-time lighting computation is not affordable for complex environments, techniques of lighting storage and real-time reconstruction using pre-calculated results are developed. Special attention is paid to the solutions which use perception-guided algorithms to improve their performance. This makes it possible to focus the computation on readily visible scene details, and to stop it when further improvement of the image quality cannot be perceived by the human observer. Also, by better use of perception-motivated physically-based partial solutions, meaningful images can be presented to the user at the early stages of computation. 
Since many algorithms make simplifying assumptions about the underlying physical model in order to achieve gains in rendering performance, a validation procedure for testing lighting simulation accuracy and image quality is proposed. To check the requirement of appearance predictability imposed on the developed algorithms, the rendered images are compared against the corresponding real-world views.
Scheel, A., Stamminger, M., and Seidel, H.-P. 2001. Thrifty Final Gather for Radiosity. Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering, Springer.
Export
BibTeX
@inproceedings{Scheel2001, TITLE = {Thrifty Final Gather for Radiosity}, AUTHOR = {Scheel, Annette and Stamminger, Marc and Seidel, Hans-Peter}, LANGUAGE = {eng}, ISBN = {3-211-83709-4}, LOCALID = {Local-ID: C125675300671F7B-B8DB9D578EB3CD31C1256A7D0059548F-Scheel2001}, PUBLISHER = {Springer}, YEAR = {2001}, DATE = {2001}, BOOKTITLE = {Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering}, EDITOR = {Gortler, Steven and Myszkowski, Karol}, PAGES = {1--12}, }
Endnote
%0 Conference Proceedings %A Scheel, Annette %A Stamminger, Marc %A Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Thrifty Final Gather for Radiosity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-32D3-8 %F EDOC: 520208 %F OTHER: Local-ID: C125675300671F7B-B8DB9D578EB3CD31C1256A7D0059548F-Scheel2001 %I Springer %D 2001 %B 12th Eurographics Workshop on Rendering %Z date of event: 2001-06-25 - 2001-06-27 %C London %B Rendering Techniques 2001: Proceedings of the 12th Eurographics Workshop on Rendering %E Gortler, Steven; Myszkowski, Karol %P 1 - 12 %I Springer %@ 3-211-83709-4
2000
Myszkowski, K. 2000. Chapter 4: Perception-driven Global Illumination and Rendering Computation, and Chapter 6: Perception-driven rendering of high-quality walkthrough animations. In: Image quality metrics (Course 44). ACM SIGGRAPH, New York, USA.
Export
BibTeX
@incollection{Myszkowski2000d, TITLE = {Chapter 4: Perception-driven Global Illumination and Rendering Computation, and Chapter 6: Perception-driven rendering of high-quality walkthrough animations}, AUTHOR = {Myszkowski, Karol}, LANGUAGE = {eng}, ISBN = {1-58113-276-X}, LOCALID = {Local-ID: C125675300671F7B-C31DAAC1BAF847C2C1256A0000477A7E-Myszkowski2000d}, PUBLISHER = {ACM SIGGRAPH}, ADDRESS = {New York, USA}, YEAR = {2000}, DATE = {2000}, BOOKTITLE = {Image quality metrics (Course 44)}, EDITOR = {McNamara, Ann and Chalmers, Alan}, PAGES = {43--59 and 75--81}, SERIES = {ACM Siggraph Course Notes}, VOLUME = {44}, }
Endnote
%0 Book Section %A Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Chapter 4: Perception-driven Global Illumination and Rendering Computation, and Chapter 6: Perception-driven rendering of high-quality walkthrough animations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-3490-E %F EDOC: 520173 %F OTHER: Local-ID: C125675300671F7B-C31DAAC1BAF847C2C1256A0000477A7E-Myszkowski2000d %I ACM SIGGRAPH %C New York, USA %D 2000 %B Image quality metrics (Course 44) %E McNamara, Ann; Chalmers, Alan %P 43 - 59 and 75 - 81 %I ACM SIGGRAPH %C New York, USA %@ 1-58113-276-X %S ACM Siggraph Course Notes %N 44
Myszkowski, K. and Kunii, T.L. 2000. A Case Study Towards Validation of Global Illumination Algorithms: Progressive Hierarchical Radiosity with Clustering. The Visual Computer 16, 5.
Abstract
The paper consists of two main parts: presentation of an efficient global illumination algorithm and description of its extensive experimental validation. In the first part, a hybrid of cluster-based hierarchical and progressive radiosity techniques is proposed, which does not require storing links between interacting surfaces and clusters. The clustering does not rely on input geometry, but is performed on the basis of local position in the scene for a pre-meshed scene model. The locality of the resulting clusters improves the accuracy of form factor calculations, and increases the number of possible high-level energy transfers between clusters within an imposed error bound. Limited refinement of the hierarchy of light interactions is supported without compromising the quality of shading when intermediate images are produced immediately upon user request. In the second part, a multi-stage validation procedure is proposed and results obtained using the presented algorithm are discussed. At first, experimental validation of the algorithm against analytically-derived and measured real-world data is performed to check how calculation speed is traded for lighting simulation accuracy for various clustering and meshing scenarios. Then the algorithm performance and rendering quality is tested by a direct comparison of the virtual and real-world images of a complex environment.
Export
BibTeX
@article{Myszkowski2000b, TITLE = {A Case Study Towards Validation of Global Illumination Algorithms: Progressive Hierarchical Radiosity with Clustering}, AUTHOR = {Myszkowski, Karol and Kunii, Tosiyasu L.}, LANGUAGE = {eng}, ISSN = {0178-2789}, URL = {http://link.springer.de/link/service/journals/00371/bibs/0016005/00160271.htm}, LOCALID = {Local-ID: C125675300671F7B-B8F930C0A468A8C2C1256A000045EE94-Myszkowski2000b}, YEAR = {2000}, DATE = {2000}, ABSTRACT = {The paper consists of two main parts: presentation of an efficient global illumination algorithm and description of its extensive experimental validation. In the first part, a hybrid of cluster-based hierarchical and progressive radiosity techniques is proposed, which does not require storing links between interacting surfaces and clusters. The clustering does not rely on input geometry, but is performed on the basis of local position in the scene for a pre-meshed scene model. The locality of the resulting clusters improves the accuracy of form factor calculations, and increases the number of possible high-level energy transfers between clusters within an imposed error bound. Limited refinement of the hierarchy of light interactions is supported without compromising the quality of shading when intermediate images are produced immediately upon user request. In the second part, a multi-stage validation procedure is proposed and results obtained using the presented algorithm are discussed. At first, experimental validation of the algorithm against analytically-derived and measured real-world data is performed to check how calculation speed is traded for lighting simulation accuracy for various clustering and meshing scenarios. Then the algorithm performance and rendering quality is tested by a direct comparison of the virtual and real-world images of a complex environment.}, JOURNAL = {The Visual Computer}, VOLUME = {16}, NUMBER = {5}, PAGES = {271--288}, }
Endnote
%0 Journal Article %A Myszkowski, Karol %A Kunii, Tosiyasu L. %+ Computer Graphics, MPI for Informatics, Max Planck Society %T A Case Study Towards Validation of Global Illumination Algorithms: Progressive Hierarchical Radiosity with Clustering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-3480-1 %F EDOC: 520172 %U http://link.springer.de/link/service/journals/00371/bibs/0016005/00160271.htm %F OTHER: Local-ID: C125675300671F7B-B8F930C0A468A8C2C1256A000045EE94-Myszkowski2000b %D 2000 %* Review method: peer-reviewed %X The paper consists of two main parts: presentation of an efficient global illumination algorithm and description of its extensive experimental validation. In the first part, a hybrid of cluster-based hierarchical and progressive radiosity techniques is proposed, which does not require storing links between interacting surfaces and clusters. The clustering does not rely on input geometry, but is performed on the basis of local position in the scene for a pre-meshed scene model. The locality of the resulting clusters improves the accuracy of form factor calculations, and increases the number of possible high-level energy transfers between clusters within an imposed error bound. Limited refinement of the hierarchy of light interactions is supported without compromising the quality of shading when intermediate images are produced immediately upon user request. In the second part, a multi-stage validation procedure is proposed and results obtained using the presented algorithm are discussed. At first, experimental validation of the algorithm against analytically-derived and measured real-world data is performed to check how calculation speed is traded for lighting simulation accuracy for various clustering and meshing scenarios. Then the algorithm performance and rendering quality is tested by a direct comparison of the virtual and real-world images of a complex environment. %J The Visual Computer %V 16 %N 5 %& 271 %P 271 - 288 %@ false
Myszkowski, K., Rokita, P., and Tawara, T. 2000. Perception-Based Fast Rendering and Antialiasing of Walkthrough Sequences. IEEE Transactions on Visualization and Computer Graphics 6, 4.
Abstract
In this paper, we consider accelerated rendering of high quality walkthrough animation sequences along predefined paths. To improve rendering performance we use a combination of: a hybrid ray tracing and Image-Based Rendering (IBR) technique, and a novel perception-based antialiasing technique. In our rendering solution we derive as many pixels as possible using inexpensive IBR techniques without affecting the animation quality. A perception-based spatiotemporal Animation Quality Metric (AQM) is used to automatically guide such a hybrid rendering. The Image Flow (IF) obtained as a by-product of the IBR computation is an integral part of the AQM. The final animation quality is enhanced by an efficient spatiotemporal antialiasing, which utilizes the IF to perform a motion-compensated filtering. The filter parameters have been tuned using the AQM predictions of animation quality as perceived by the human observer. These parameters adapt locally to the visual pattern velocity.
Export
BibTeX
@article{Myszkowski2000a, TITLE = {Perception-Based Fast Rendering and Antialiasing of Walkthrough Sequences}, AUTHOR = {Myszkowski, Karol and Rokita, Przemyslaw and Tawara, Takehiro}, LANGUAGE = {eng}, ISSN = {1077-2626}, URL = {http://www.computer.org/tvcg/tg2000/v4toc.htm}, LOCALID = {Local-ID: C125675300671F7B-D26AC90942D3569BC1256A000039C125-Myszkowski2000a}, YEAR = {2000}, DATE = {2000}, ABSTRACT = {In this paper, we consider accelerated rendering of high quality walkthrough animation sequences along predefined paths. To improve rendering performance we use a combination of: a hybrid ray tracing and Image-Based Rendering (IBR) technique, and a novel perception-based antialiasing technique. In our rendering solution we derive as many pixels as possible using inexpensive IBR techniques without affecting the animation quality. A perception-based spatiotemporal Animation Quality Metric (AQM) is used to automatically guide such a hybrid rendering. The Image Flow (IF) obtained as a by-product of the IBR computation is an integral part of the AQM. The final animation quality is enhanced by an efficient spatiotemporal antialiasing, which utilizes the IF to perform a motion-compensated filtering. The filter parameters have been tuned using the AQM predictions of animation quality as perceived by the human observer. These parameters adapt locally to the visual pattern velocity.}, JOURNAL = {IEEE Transactions on Visualization and Computer Graphics}, VOLUME = {6}, NUMBER = {4}, PAGES = {360--379}, }
Endnote
%0 Journal Article %A Myszkowski, Karol %A Rokita, Przemyslaw %A Tawara, Takehiro %+ Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perception-Based Fast Rendering and Antialiasing of Walkthrough Sequences : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-34E7-E %F EDOC: 520170 %U http://www.computer.org/tvcg/tg2000/v4toc.htm %F OTHER: Local-ID: C125675300671F7B-D26AC90942D3569BC1256A000039C125-Myszkowski2000a %D 2000 %* Review method: peer-reviewed %X In this paper, we consider accelerated rendering of high quality walkthrough animation sequences along predefined paths. To improve rendering performance we use a combination of: a hybrid ray tracing and Image-Based Rendering (IBR) technique, and a novel perception-based antialiasing technique. In our rendering solution we derive as many pixels as possible using inexpensive IBR techniques without affecting the animation quality. A perception-based spatiotemporal Animation Quality Metric (AQM) is used to automatically guide such a hybrid rendering. The Image Flow (IF) obtained as a by-product of the IBR computation is an integral part of the AQM. The final animation quality is enhanced by an efficient spatiotemporal antialiasing, which utilizes the IF to perform a motion-compensated filtering. The filter parameters have been tuned using the AQM predictions of animation quality as perceived by the human observer. These parameters adapt locally to the visual pattern velocity. %J IEEE Transactions on Visualization and Computer Graphics %V 6 %N 4 %& 360 %P 360 - 379 %@ false
Volevich, V., Myszkowski, K., Khodulev, A., and Kopylov, E. 2000. Using the Visual Differences Predictor to Improve Performance of Progressive Global Illumination Computations. ACM Transactions on Graphics 19, 2.
Abstract
A novel view-independent technique for progressive global illumination computations has been developed that uses prediction of visible differences to improve both efficiency and effectiveness of physically-sound lighting solutions. The technique is a mixture of stochastic (density estimation) and deterministic (adaptive mesh refinement) algorithms that are used in a sequence optimized to reduce the differences between the intermediate and final images as perceived by the human observer in the course of lighting computations. The quantitative measurements of visibility were obtained using the model of human vision captured in the Visible Differences Predictor (VDP) developed by Daly [Daly93]. The VDP responses were used to support selection of the best component algorithms from a pool of global illumination solutions, and to enhance the selected algorithms for even better progressive refinement of the image quality. Also, the VDP was used to determine the optimal sequential order of component-algorithm execution, and to choose the points at which switch-over between algorithms should take place. As the VDP is computationally expensive, it was applied exclusively at the stage of design and tuning of the composite technique, and so perceptual considerations are embedded into the resulting solution, though no VDP calculations are performed during the lighting simulation. The proposed global illumination technique is also novel, providing at unprecedented speeds intermediate image solutions of high quality even for complex scenes. One advantage of the technique is that local estimates of global illumination are readily available at early stages of computations. This makes possible the development of more robust adaptive mesh subdivision, which is guided by local contrast information. 
Also, based on stochastically-derived estimates of the local illumination error, an efficient object space filtering is applied to substantially reduce the visible noise inherent in stochastic solutions.
Export
BibTeX
@article{Volevich2000, TITLE = {Using the Visual Differences Predictor to Improve Performance of Progressive Global Illumination Computations}, AUTHOR = {Volevich, Vladimir and Myszkowski, Karol and Khodulev, Andrei and Kopylov, Edward}, LANGUAGE = {eng}, ISSN = {0730-0301}, URL = {http://www.acm.org/pubs/citations/journals/tog/2000-19-2/p122-volevich/}, LOCALID = {Local-ID: C125675300671F7B-7CA22EAB9B616843C1256A00003EB0E0-Volevich2000}, YEAR = {2000}, DATE = {2000}, ABSTRACT = {A novel view-independent technique for progressive global illumination computations has been developed that uses prediction of visible differences to improve both efficiency and effectiveness of physically-sound lighting solutions. The technique is a mixture of stochastic (density estimation) and deterministic (adaptive mesh refinement) algorithms that are used in a sequence optimized to reduce the differences between the intermediate and final images as perceived by the human observer in the course of lighting computations. The quantitative measurements of visibility were obtained using the model of human vision captured in the Visible Differences Predictor (VDP) developed by Daly \cite{Daly93}. The VDP responses were used to support selection of the best component algorithms from a pool of global illumination solutions, and to enhance the selected algorithms for even better progressive refinement of the image quality. Also, the VDP was used to determine the optimal sequential order of component-algorithm execution, and to choose the points at which switch-over between algorithms should take place. As the VDP is computationally expensive, it was applied exclusively at the stage of design and tuning of the composite technique, and so perceptual considerations are embedded into the resulting solution, though no VDP calculations are performed during the lighting simulation. 
The proposed global illumination technique is also novel, providing at unprecedented speeds intermediate image solutions of high quality even for complex scenes. One advantage of the technique is that local estimates of global illumination are readily available at early stages of computations. This makes possible the development of more robust adaptive mesh subdivision, which is guided by local contrast information. Also, based on stochastically-derived estimates of the local illumination error, an efficient object space filtering is applied to substantially reduce the visible noise inherent in stochastic solutions.}, JOURNAL = {ACM Transactions on Graphics}, VOLUME = {19}, NUMBER = {2}, PAGES = {122--161}, }
Endnote
%0 Journal Article %A Volevich, Vladimir %A Myszkowski, Karol %A Khodulev, Andrei %A Kopylov, Edward %+ Computer Graphics, MPI for Informatics, Max Planck Society %T Using the Visual Differences Predictor to Improve Performance of Progressive Global Illumination Computations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-3506-F %F EDOC: 520171 %U http://www.acm.org/pubs/citations/journals/tog/2000-19-2/p122-volevich/ %F OTHER: Local-ID: C125675300671F7B-7CA22EAB9B616843C1256A00003EB0E0-Volevich2000 %D 2000 %* Review method: peer-reviewed %X A novel view-independent technique for progressive global illumination computations has been developed that uses prediction of visible differences to improve both efficiency and effectiveness of physically-sound lighting solutions. The technique is a mixture of stochastic (density estimation) and deterministic (adaptive mesh refinement) algorithms that are used in a sequence optimized to reduce the differences between the intermediate and final images as perceived by the human observer in the course of lighting computations. The quantitative measurements of visibility were obtained using the model of human vision captured in the Visible Differences Predictor (VDP) developed by Daly \cite{Daly93}. The VDP responses were used to support selection of the best component algorithms from a pool of global illumination solutions, and to enhance the selected algorithms for even better progressive refinement of the image quality. Also, the VDP was used to determine the optimal sequential order of component-algorithm execution, and to choose the points at which switch-over between algorithms should take place. As the VDP is computationally expensive, it was applied exclusively at the stage of design and tuning of the composite technique, and so perceptual considerations are embedded into the resulting solution, though no VDP calculations are performed during the lighting simulation. 
The proposed global illumination technique is also novel, providing at unprecedented speeds intermediate image solutions of high quality even for complex scenes. One advantage of the technique is that local estimates of global illumination are readily available at early stages of computations. This makes possible the development of more robust adaptive mesh subdivision, which is guided by local contrast information. Also, based on stochastically-derived estimates of the local illumination error, an efficient object space filtering is applied to substantially reduce the visible noise inherent in stochastic solutions. %J ACM Transactions on Graphics %V 19 %N 2 %& 122 %P 122 - 161 %@ false