
PhD
[1]
P. Danilewski, “ManyDSL: One Host for All Language Needs,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Languages shape thoughts. This is true for human spoken languages as much as for programming languages. As computers continue to expand their dominance in almost every aspect of our lives, the need to express new concepts and domains in computer languages more adequately arises. However, to evolve our thoughts we need to evolve the languages we speak. But what tools are there to create and upgrade computer languages? How can we encourage developers to define their own languages quickly, to best match the domains they work in? Nowadays two main approaches exist. Dedicated language tools and parser generators allow developers to define new standalone languages from scratch. Alternatively, one can “abuse” sufficiently flexible host languages to embed small domain-specific languages within them. Both approaches have their respective limitations. Creating standalone languages is a major endeavor, and such languages cannot be combined easily with other languages. Embedding, on the other hand, is limited by the syntax of the host language. Embedded languages, once defined, are always present, without a clear distinction between them and the host language. When used extensively, this leads to one humongous conglomerate of languages, with confusing syntax and unexpected interactions. In this work we present an alternative: ManyDSL. It is a unique interpreter and compiler that draws strength from both approaches while avoiding the above weaknesses. ManyDSL features its own LL(1) parser generator, breaking the limits of the syntax of the host language. The grammar description is given in the same host language as the rest of the program. Portions of the grammar can be parametrized and abstracted into functions, in order to be reused in other language definitions. Languages are created on the fly during the interpretation process and may be used to parse selected fragments of subsequent source files. Similarly to embedded languages, ManyDSL translates all custom languages to the same host language before execution. The host language uses a continuation-passing style with a novel, dynamic approach to staging. The staging allows for arbitrary partial evaluation and for executing code at different phases of the compilation process. This can be used to define domain-specific optimizations and auxiliary computation (e.g. for verification), all within an entirely functional approach, without any explicit use of abstract syntax trees or code transformations. With the help of ManyDSL a user is able to create new languages with distinct, easily recognizable syntax. Moreover, the user is able to define and use many such languages within a single project. Languages can be switched at a well-defined boundary, enabling their interaction in a clear and controlled way. ManyDSL is meant to be a first step towards broader language pluralism. With it we want to encourage developers to design and use languages that best suit their needs. We believe that over time, with the help of grammar libraries, creating new languages will become accessible to every programmer.
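To make the grammar-fragments-as-functions idea concrete, here is a small illustrative Python sketch (entirely hypothetical: ManyDSL defines grammars in its own host language, and none of the names below come from it). Grammar fragments are ordinary functions that can be parametrized and reused across language definitions, in the spirit described above:

    # Hypothetical sketch: grammar fragments as reusable functions.
    def token(t):
        """Parser for a single literal token."""
        def parse(toks, i):
            if i < len(toks) and toks[i] == t:
                return t, i + 1
            return None
        return parse

    def seq(*parsers):
        """Run parsers in sequence; fail if any part fails."""
        def parse(toks, i):
            out = []
            for p in parsers:
                r = p(toks, i)
                if r is None:
                    return None
                node, i = r
                out.append(node)
            return out, i
        return parse

    def alt(*parsers):
        """Ordered choice; with disjoint FIRST sets this stays LL(1)-like."""
        def parse(toks, i):
            for p in parsers:
                r = p(toks, i)
                if r is not None:
                    return r
            return None
        return parse

    def infix(op, operand):
        """Reusable fragment: iterative parsing of 'operand (op operand)*'."""
        def parse(toks, i):
            r = operand(toks, i)
            if r is None:
                return None
            left, i = r
            while True:
                r2 = seq(token(op), operand)(toks, i)
                if r2 is None:
                    return left, i
                (_, right), i = r2
                left = (op, left, right)
        return parse

    num = alt(*[token(str(d)) for d in range(10)])
    expr = infix("+", num)  # the fragment, instantiated with parameters
    print(expr("1 + 2 + 3".split(), 0))  # (('+', ('+', '1', '2'), '3'), 5)

Because infix is just a function, the same fragment can be instantiated for other operators or operand grammars; the thesis enables this kind of reuse at the level of real LL(1) grammars rather than hand-written parsers.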
BibTeX
@phdthesis{Danilewskiphd17,
  TITLE = {Many{DSL}: One Host for All Language Needs},
  AUTHOR = {Danilewski, Piotr},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291-scidok-68840},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2017},
}
Endnote
%0 Thesis
%A Danilewski, Piotr
%Y Slusallek, Philipp
%A referee: Wilhelm, Reinhard
%+ International Max Planck Research School, MPI for Informatics, Max Planck Society; External Organizations; External Organizations
%T ManyDSL: One Host for All Language Needs
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002D-934E-8
%U urn:nbn:de:bsz:291-scidok-68840
%I Universität des Saarlandes
%C Saarbrücken
%D 2017
%P 257 p.
%V phd
%9 phd
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6884/
%U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[2]
M. Dirnberger, K. Mehlhorn, M. Grube, and H.-G. Döbereiner, “Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum Polycephalum,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
This doctoral thesis aims towards distributed natural computing inspired by the slime mold Physarum polycephalum. The vein networks formed by this organism presumably support efficient transport of protoplasmic fluid. Devising models which capture the natural efficiency of the organism and form a suitable basis for the development of natural computing algorithms is an interesting and challenging goal. We start working towards this goal by designing and executing wet-lab experiments geared towards producing a large number of images of the vein networks of P. polycephalum. Next, we turn the depicted vein networks into graphs using our own custom software called Nefi. This enables a detailed numerical study, yielding a catalogue of characterizing observables spanning a wide array of different graph properties. To share our results and data, i.e., raw experimental data, graphs, and analysis results, we introduce a dedicated repository revolving around slime mold data, the Smgr. The purpose of this repository is to promote data reuse and to foster a practice of increased data sharing. Finally, we present a model based on interacting electronic circuits, including current-controlled voltage sources, which mimics the emergent flow patterns observed in live P. polycephalum. The model is simple, distributed, and robust to changes in the underlying network topology. Thus it constitutes a promising basis for the development of distributed natural computing algorithms.
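As a rough illustration of the circuit-based modelling idea, the sketch below solves a plain resistor network by nodal analysis (an assumption on my part: the thesis model additionally includes current-controlled voltage sources and adaptive dynamics, which are omitted here). Higher-conductance paths carry proportionally more flow, which is the basic ingredient of the emergent vein patterns:

    # Minimal nodal-analysis sketch for a conductance network.
    import numpy as np

    def edge_flows(n, edges, source, sink, current=1.0):
        """edges: list of (u, v, conductance). Returns {(u, v): flow}."""
        G = np.zeros((n, n))            # graph Laplacian of conductances
        for u, v, g in edges:
            G[u, u] += g; G[v, v] += g
            G[u, v] -= g; G[v, u] -= g
        b = np.zeros(n)
        b[source], b[sink] = current, -current
        G[sink, :] = 0.0; G[sink, sink] = 1.0; b[sink] = 0.0  # ground sink
        v = np.linalg.solve(G, b)       # node potentials
        return {(u, w): g * (v[u] - v[w]) for u, w, g in edges}

    # Two parallel paths from node 0 to node 3; the higher-conductance
    # path (via node 1) carries twice the flow of the other (via node 2).
    flows = edge_flows(4, [(0, 1, 2.0), (1, 3, 2.0),
                           (0, 2, 1.0), (2, 3, 1.0)], source=0, sink=3)
    print(flows)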
BibTeX
@phdthesis{dirnbergerphd17,
  TITLE = {Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum Polycephalum},
  AUTHOR = {Dirnberger, Michael and Mehlhorn, Kurt and Grube, Martin and D{\"o}bereiner, Hans-G{\"u}nther},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291-scidok-69424},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2017},
}
Endnote
%0 Thesis
%A Dirnberger, Michael
%A Mehlhorn, Kurt
%A Grube, Martin
%A Döbereiner, Hans-Günther
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society; International Max Planck Research School, MPI for Informatics, Max Planck Society; Algorithms and Complexity, MPI for Informatics, Max Planck Society; External Organizations; External Organizations
%T Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum Polycephalum
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002D-DE4F-0
%U urn:nbn:de:bsz:291-scidok-69424
%I Universität des Saarlandes
%C Saarbrücken
%D 2017
%P XV, 193 p.
%V phd
%9 phd
%U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6942/
[3]
S. Dutta, “Efficient Knowledge Management for Named Entities from Text,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language text available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge, in this regard, is the precise identification and disambiguation of entities across documents for the extraction of attributes/relations and their proper representation in knowledge bases. Additionally, the applicability of such repositories involves not only the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation aims to tackle the above problems by presenting efficient approaches for entity-centric knowledge acquisition from text and its representation in knowledge repositories. It presents a robust approach for identifying text phrases that pertain to the same named entity across huge corpora, and for their disambiguation to canonical entities present in a knowledge base, using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. This work further presents language and consistency features for classification models to compute the credibility of obtained textual facts, ensuring the quality of the extracted information. Finally, an encoding algorithm, using frequent term detection and improved data locality, is presented to represent entities for enhanced knowledge base storage and query performance.
BibTeX
@phdthesis{duttaphd17,
  TITLE = {Efficient Knowledge Management for Named Entities from Text},
  AUTHOR = {Dutta, Sourav},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291-scidok-67924},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2017},
}
Endnote
%0 Thesis
%A Dutta, Sourav
%Y Weikum, Gerhard
%A referee: Nejdl, Wolfgang
%A referee: Berberich, Klaus
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society; International Max Planck Research School, MPI for Informatics, Max Planck Society; Databases and Information Systems, MPI for Informatics, Max Planck Society; External Organizations; Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Efficient Knowledge Management for Named Entities from Text
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002C-A793-E
%U urn:nbn:de:bsz:291-scidok-67924
%I Universität des Saarlandes
%C Saarbrücken
%D 2017
%P xv, 134 p.
%V phd
%9 phd
%U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6792/
[4]
S. Friedrichs, C. Lenzen, K. Mehlhorn, and M. Ghaffari, “Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
We study three problems. The first is the phenomenon of metastability in digital circuits. This is a state of bistable storage elements, such as registers, that is neither logical 0 nor 1 and breaks the abstraction of Boolean logic. We propose a time- and value-discrete model for metastability in digital circuits and show that it reflects relevant physical properties. Further, we propose the fundamentally new approach of using logical masking to perform meaningful computations despite the presence of metastable upsets, and analyze which functions can be computed in our model. Additionally, we show that circuits with masking registers grow computationally more powerful with each available clock cycle. The second topic is parallel algorithms, based on an algebraic abstraction of the Moore-Bellman-Ford algorithm, for solving various distance problems. Our focus is on distance approximations that obey the triangle inequality while at the same time achieving polylogarithmic depth and low work. Finally, we study the continuous Terrain Guarding Problem. We show that it has a rational discretization with a quadratic number of guard candidates, establish its membership in NP and the existence of a PTAS, and present an efficient implementation of a solver.
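The idea of logical masking can be illustrated with three-valued (Kleene) logic, where 'M' denotes a metastable signal; this toy encoding is my own and is not the thesis' circuit model. A controlling input fixes a gate's output regardless of a metastable input, and adding a redundant consensus term makes a multiplexer mask a metastable select line whenever its data inputs agree:

    # Toy three-valued gates: values are 0, 1, or 'M' (metastable).
    def AND(a, b):
        if a == 0 or b == 0:        # 0 masks metastability: 0 AND M = 0
            return 0
        if a == 'M' or b == 'M':
            return 'M'
        return 1

    def OR(a, b):
        if a == 1 or b == 1:        # 1 masks metastability: 1 OR M = 1
            return 1
        if a == 'M' or b == 'M':
            return 'M'
        return 0

    def NOT(a):
        return 'M' if a == 'M' else 1 - a

    def mux(s, a, b):
        # Naive (a AND NOT s) OR (b AND s) would output 'M' for s = 'M'
        # even when a == b; the consensus term AND(a, b) restores masking.
        return OR(OR(AND(a, NOT(s)), AND(b, s)), AND(a, b))

    print(mux(0, 1, 0))     # 1: ordinary Boolean behaviour
    print(mux('M', 1, 1))   # 1: metastable select is masked (inputs agree)
    print(mux('M', 0, 1))   # 'M': the output genuinely depends on s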
BibTeX
@phdthesis{Friedrichsphd2017,
  TITLE = {Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding},
  AUTHOR = {Friedrichs, Stephan and Lenzen, Christoph and Mehlhorn, Kurt and Ghaffari, Mohsen},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291-scidok-69660},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2017},
}
Endnote
%0 Thesis
%A Friedrichs, Stephan
%A Lenzen, Christoph
%A Mehlhorn, Kurt
%A Ghaffari, Mohsen
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society; International Max Planck Research School, MPI for Informatics, Max Planck Society; Algorithms and Complexity, MPI for Informatics, Max Planck Society; Algorithms and Complexity, MPI for Informatics, Max Planck Society; External Organizations
%T Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002D-E9A7-B
%U urn:nbn:de:bsz:291-scidok-69660
%I Universität des Saarlandes
%C Saarbrücken
%D 2017
%P x, 226 p.
%V phd
%9 phd
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6966/
%U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[5]
P. Garrido, “Learning to Un-Rank: Quantifying Search Exposure for Users in Online Communities,” Universität des Saarlandes, Saarbrücken, 2017.
BibTeX
@phdthesis{Garridophd17,
  TITLE = {Learning to Un-Rank: Quantifying Search Exposure for Users in Online Communities},
  AUTHOR = {Garrido, Pablo},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291-scidok-69419},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2017},
}
Endnote
%0 Thesis
%A Garrido, Pablo
%Y Theobalt, Christian
%A referee: Perez, Patrick
%A referee: Pauly, Mark
%+ Computer Graphics, MPI for Informatics, Max Planck Society; International Max Planck Research School, MPI for Informatics, Max Planck Society; Computer Graphics, MPI for Informatics, Max Planck Society; External Organizations; External Organizations
%T Learning to Un-Rank: Quantifying Search Exposure for Users in Online Communities
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002D-D1BC-2
%U urn:nbn:de:bsz:291-scidok-69419
%I Universität des Saarlandes
%C Saarbrücken
%D 2017
%P 185 p.
%V phd
%9 phd
%U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6941/
[6]
Y. Gryaditskaya, “High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Two-dimensional, conventional images are gradually losing their hegemony, leaving room for novel formats. Among these, 8-bit images are giving way to high dynamic range (HDR) image formats, which improve the colour gamut and the visibility of details in dark and bright areas, leading to a more immersive viewing experience. This opens wide opportunities for post-processing, which can be useful for artistic rendering, enhancement of the viewing experience, or medical applications. Simultaneously, light-field scene representations are gaining importance, propelled by the recent reappearance of virtual reality and by improvements in acquisition techniques as well as computational and storage capabilities. Light-field data likewise enables a broad range of effects in post-production: among others, a change of camera position, aperture, or focal length. It facilitates object insertions and simplifies visual-effects workflows by integrating the 3D nature of visual effects with the 3D nature of light fields. Content generation is one of the stumbling blocks in these realms. Sensor limitations of a conventional camera do not allow capturing a wide dynamic range. This is especially the case for mobile devices, where small sensors are optimised for capturing at high resolution. The “HDR mode” often encountered on such devices relies on techniques called “exposure fusion” and allows the limited range of a sensor to be partially overcome. HDR video, at the same time, remains a challenging problem. We suggest a solution for HDR video capture on a mobile device. We analyse the dynamic range of motion regions, the regions most prone to reconstruction artefacts, and suggest a real-time exposure selection algorithm. Further, an HDR content visualization task often requires its input to be in absolute values. We address this problem by presenting a calibration algorithm that can be applied to existing imagery and does not require any additional measurement hardware. Finally, as light-field use becomes more common, a key challenge is the ability to edit or modify the appearance of the objects in a light field. To this end, we propose a multidimensional filtering approach in which the specular highlights are filtered in the spatial and angular domains to achieve a desired increase of the material roughness.
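For readers unfamiliar with exposure fusion, the following minimal sketch shows the underlying idea of weighting each bracketed shot by a per-pixel "well-exposedness" measure, in the spirit of Mertens et al.'s classic method (an assumption on my part: the thesis' real-time exposure selection for video is considerably more involved than this):

    # Minimal exposure-fusion sketch: favour mid-range pixels per shot.
    import numpy as np

    def well_exposedness(img, sigma=0.2):
        """Per-pixel weight, peaking where intensities are near mid-grey."""
        return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

    def fuse(exposures):
        """Weighted per-pixel average of aligned exposures in [0, 1]."""
        ws = np.stack([well_exposedness(e) for e in exposures])
        ws /= ws.sum(axis=0) + 1e-12      # normalise weights per pixel
        return (ws * np.stack(exposures)).sum(axis=0)

    dark = np.array([[0.05, 0.4]])        # underexposed shot
    bright = np.array([[0.5, 0.95]])      # overexposed shot
    print(fuse([dark, bright]))           # each pixel leans on the shot
                                          # that exposes it better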
BibTeX
@phdthesis{Gryphd17,
  TITLE = {High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing},
  AUTHOR = {Gryaditskaya, Yulia},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291-scidok-69296},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2017},
}
Endnote
%0 Thesis
%A Gryaditskaya, Yulia
%Y Seidel, Hans-Peter
%A referee: Myszkowski, Karol
%+ Computer Graphics, MPI for Informatics, Max Planck Society; International Max Planck Research School, MPI for Informatics, Max Planck Society; Computer Graphics, MPI for Informatics, Max Planck Society; Computer Graphics, MPI for Informatics, Max Planck Society
%T High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002D-ABA6-3
%U urn:nbn:de:bsz:291-scidok-69296
%I Universität des Saarlandes
%C Saarbrücken
%D 2017
%P 88 p.
%V phd
%9 phd
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6929/
[7]
A. Grycner, “Constructing Lexicons of Relational Phrases,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Knowledge Bases are one of the key components of Natural Language Understanding systems. For example, DBpedia, YAGO, and Wikidata capture and organize knowledge about named entities and relations between them, which is often crucial for tasks like Question Answering and Named Entity Disambiguation. While Knowledge Bases have good coverage of prominent entities, they are often limited with respect to relations. The goal of this thesis is to bridge this gap and automatically create lexicons of textual representations of relations, namely relational phrases. The lexicons should contain information about paraphrases, hierarchy, as well as semantic types of arguments of relational phrases. The thesis makes three main contributions. The first contribution addresses disambiguating relational phrases by aligning them with the WordNet dictionary. Moreover, the alignment allows imposing the WordNet hierarchy on the relational phrases. The second contribution proposes a method for graph construction of relations using Probabilistic Graphical Models. In addition, we apply this model to relation paraphrasing. The third contribution presents a method for constructing a lexicon of relational paraphrases with fine-grained semantic typing of arguments. This method is based on information from a multilingual parallel corpus.
BibTeX
@phdthesis{Grynerphd17,
  TITLE = {Constructing Lexicons of Relational Phrases},
  AUTHOR = {Grycner, Adam},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291-scidok-69101},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2017},
}
Endnote
%0 Thesis
%A Grycner, Adam
%Y Weikum, Gerhard
%A referee: Klakow, Dietrich
%A referee: Ponzetto, Simone Paolo
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society; International Max Planck Research School, MPI for Informatics, Max Planck Society; Databases and Information Systems, MPI for Informatics, Max Planck Society; External Organizations; External Organizations
%T Constructing Lexicons of Relational Phrases
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002D-933B-1
%U urn:nbn:de:bsz:291-scidok-69101
%I Universität des Saarlandes
%C Saarbrücken
%D 2017
%P 125 p.
%V phd
%9 phd
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6910/
%U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[8]
S. Gurajada, “Distributed Querying of Large Labeled Graphs,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
A graph is a vital abstract data type with profound significance in several applications. Because of its versatility, the graph has been adapted into several different forms, and one adaptation with many practical applications is the “labeled graph”, where vertices and edges carry labels. An enormous research effort has been invested in the task of managing and querying graphs, yet many challenges remain unsolved. In this thesis, we advance the state of the art for the following query models, and propose distributed solutions to process them in an efficient and scalable manner.
• Set Reachability. We formalize and investigate a generalization of the basic notion of reachability, called set reachability. Set reachability deals with finding all reachable pairs for given source and target sets. We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query. This is achieved by precomputation, replication, and indexing of partial reachabilities among the boundary vertices.
• Basic Graph Patterns (BGP). Supported by the majority of query languages, BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on asynchronous execution, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner.
• Generalized Graph Patterns (GGP). These queries combine the semantics of pattern matching and navigational queries, and are popular in scenarios where the schema of an underlying graph is either unknown or only partially known. We present a distributed solution with a bimodal indexing layout that individually supports efficient processing of BGP queries and navigational queries. Furthermore, we design a unified query optimizer and processor that handle GGP queries efficiently and scalably.
To this end, we propose a prototype distributed engine, coined “TriAD” (Triple Asynchronous and Distributed), that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets.
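The single-round set-reachability idea can be sketched as follows (a strong simplification of the thesis' scheme, with invented names; for brevity it answers which targets are reachable from the source set rather than enumerating all reachable pairs). Each partition precomputes reachability among boundary vertices offline, so a query needs only local passes plus a lookup in that closure instead of iterative rounds:

    from collections import defaultdict

    def local_reach(adj, starts):
        """BFS closure restricted to one partition's edges."""
        seen, stack = set(starts), list(starts)
        while stack:
            u = stack.pop()
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v); stack.append(v)
        return seen

    def boundary_closure(partitions, boundary):
        """Offline step: which boundary vertices reach which others."""
        skel = defaultdict(set)
        for adj in partitions:
            for b in boundary & set(adj):
                skel[b] |= local_reach(adj, {b}) & boundary
        return {b: local_reach(dict(skel), {b}) for b in boundary}

    def reachable_targets(partitions, boundary, closure, sources, targets):
        hits, hit_boundary = set(), set()
        for adj in partitions:              # one local pass per partition
            verts = set(adj) | {v for vs in adj.values() for v in vs}
            r = local_reach(adj, sources & verts)
            hits |= r & targets
            hit_boundary |= r & boundary
        frontier = set(hit_boundary)        # closure lookup, no extra rounds
        for b in hit_boundary:
            frontier |= closure[b]
        for adj in partitions:              # second local pass
            verts = set(adj) | {v for vs in adj.values() for v in vs}
            hits |= local_reach(adj, frontier & verts) & targets
        return hits

    p1, p2 = {"s": ["b"]}, {"b": ["t"]}     # "b" is the boundary vertex
    closure = boundary_closure([p1, p2], {"b"})
    print(reachable_targets([p1, p2], {"b"}, closure, {"s"}, {"t"}))  # {'t'}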
BibTeX
@phdthesis{guraphd2017,
  TITLE = {Distributed Querying of Large Labeled Graphs},
  AUTHOR = {Gurajada, Sairam},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291-scidok-67738},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2017},
}
Endnote
%0 Thesis
%A Gurajada, Sairam
%Y Theobald, Martin
%A referee: Weikum, Gerhard
%A referee: Özsu, M. Tamer
%A referee: Michel, Sebastian
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society; International Max Planck Research School, MPI for Informatics, Max Planck Society; Databases and Information Systems, MPI for Informatics, Max Planck Society; Databases and Information Systems, MPI for Informatics, Max Planck Society; External Organizations; Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Distributed Querying of Large Labeled Graphs
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002C-8202-E
%U urn:nbn:de:bsz:291-scidok-67738
%I Universität des Saarlandes
%C Saarbrücken
%D 2017
%P x, 167 p.
%V phd
%9 phd
%U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6773/
[9]
J. Hosang, “Analysis and Improvement of the Visual Object Detection Pipeline,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Visual object detection has seen substantial improvements in recent years due to the possibilities enabled by deep learning. While research on image classification provides continuous progress on how to learn image representations and classifiers jointly, object detection research focuses on identifying how to properly use deep learning technology to effectively localise objects. In this thesis, we analyse and improve different aspects of the commonly used detection pipeline. We analyse ten years of research on pedestrian detection and find that improvement of feature representations was the driving factor. Motivated by this finding, we adapt an end-to-end learned detector architecture from general object detection to pedestrian detection. Our deep network outperforms all previous neural networks for pedestrian detection by a large margin, even without using additional training data. After the substantial improvements in pedestrian detection in recent years, we investigate the gap between human performance and state-of-the-art pedestrian detectors. We find that pedestrian detectors still have a long way to go before they reach human performance, and we diagnose failure modes of several top-performing detectors, giving direction to future research. As a side effect, we publish new, better-localised annotations for the Caltech pedestrian benchmark. We analyse detection proposals as a preprocessing step for object detectors. We establish different metrics and compare a wide range of methods according to these metrics. By examining the relationship between the localisation of proposals and final object detection performance, we define and experimentally verify a metric that can be used as a proxy for detector performance. Furthermore, we address a structural weakness of virtually all object detection pipelines: non-maximum suppression. We analyse why it is necessary and what the shortcomings of the most common approach are. To address these problems, we present work that overcomes these shortcomings and replaces typical non-maximum suppression with a learnable alternative. The introduced paradigm paves the way to true end-to-end learning of object detectors without any post-processing. In summary, this thesis provides analyses of recent pedestrian detectors and detection proposals, improves pedestrian detection by employing deep neural networks, and presents a viable alternative to traditional non-maximum suppression.
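For reference, the "most common approach" analysed here is greedy non-maximum suppression: keep detections in descending score order and drop any box that overlaps an already-kept box by more than an intersection-over-union (IoU) threshold. A minimal version of that baseline (not the thesis' learnable alternative):

    # Greedy non-maximum suppression over axis-aligned boxes.
    def iou(a, b):
        """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    def nms(boxes, scores, thresh=0.5):
        """Keep high-scoring boxes; drop boxes overlapping a kept one."""
        order = sorted(range(len(boxes)), key=lambda i: scores[i],
                       reverse=True)
        keep = []
        for i in order:
            if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
                keep.append(i)
        return keep

    boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
    print(nms(boxes, [0.9, 0.8, 0.7]))  # [0, 2]: box 1 suppressed by box 0

The hard score ordering and the hard overlap test are exactly the non-differentiable steps that block end-to-end training, which motivates the learnable replacement discussed in the abstract.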
BibTeX
@phdthesis{Hosangphd17,
  TITLE = {Analysis and Improvement of the Visual Object Detection Pipeline},
  AUTHOR = {Hosang, Jan},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291-scidok-69080},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2017},
}
Endnote
%0 Thesis
%A Hosang, Jan
%Y Schiele, Bernt
%A referee: Ferrari, Vittorio
%+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society; International Max Planck Research School, MPI for Informatics, Max Planck Society; Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society; External Organizations
%T Analysis and Improvement of the Visual Object Detection Pipeline
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002D-8CC9-B
%U urn:nbn:de:bsz:291-scidok-69080
%I Universität des Saarlandes
%C Saarbrücken
%D 2017
%P 205 p.
%V phd
%9 phd
%U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6908/
[10]
J. Kalojanov, “R-symmetry for Triangle Meshes: Detection and Applications,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
In this thesis, we investigate a certain type of local similarity between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have a completely different global structure. This makes r-microtiling suitable for inverse modeling of shape variations, and we develop a method for shape decomposition into rigid, 3D-manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: we consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity, and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.
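To make the decomposition idea concrete: one way to approximate the search for points with identical r-neighborhoods is to bucket surface samples by a rotation-invariant signature of their neighborhood. The distance-histogram signature below is a deliberate oversimplification for illustration only; the signature choice, point sampling, and exact-match criterion are assumptions, not the detection method of the thesis:

```python
import numpy as np
from collections import defaultdict

def microtile_candidates(points, r, bins=8):
    """Bucket surface sample points by a crude rotation-invariant signature
    of their r-neighborhood (a histogram of neighbor distances); points
    sharing a bucket are candidates for lying on the same building block."""
    groups = defaultdict(list)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        neighbors = d[(d > 0.0) & (d <= r)]
        signature, _ = np.histogram(neighbors, bins=bins, range=(0.0, r))
        groups[tuple(signature)].append(i)
    return groups  # signature -> indices of locally similar points
```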
Export
BibTeX
@phdthesis{Kalojanovphd2017, TITLE = {R-symmetry for Triangle Meshes: Detection and Applications}, AUTHOR = {Kalojanov, Javor}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {In this thesis, we investigate a certain type of local similarities between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have completely different global structure. This allows for using r-microtiling for inverse modeling of shape variations and we develop a method for shape decomposi tion into rigid, 3D manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: We consider the connection between microtiles and noncontext-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.}, }
Endnote
%0 Thesis %A Kalojanov, Javor %Y Slusallek, Philipp %A referee: Wand, Michael %A referee: Mitra, Niloy %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T R-symmetry for Triangle Meshes: Detection and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-96A3-B %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 94 p. %V phd %9 phd %X In this thesis, we investigate a certain type of local similarities between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have completely different global structure. This allows for using r-microtiling for inverse modeling of shape variations and we develop a method for shape decomposi tion into rigid, 3D manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: We consider the connection between microtiles and noncontext-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2017/6787/
[11]
E. Kuzey, “Populating Knowledge Bases with Temporal Information,” Universität des Saarlandes, Saarbrücken, 2017.
Export
BibTeX
@phdthesis{KuzeyPhd2017, TITLE = {Populating Knowledge bases with Temporal Information}, AUTHOR = {Kuzey, Erdal}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, }
Endnote
%0 Thesis %A Kuzey, Erdal %Y Weikum, Gerhard %A referee: de Rijke , Maarten %A referee: Suchanek, Fabian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Populating Knowledge bases with Temporal Information : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-EAE5-7 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P XIV, 143 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6811/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[12]
M. Lapin, “Image Classification with Limited Training Data and Class Ambiguity,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications due to scarcity of the data itself or the high costs associated with human annotation. Introducing additional information and structural constraints can help improve the performance of a learning algorithm. In this thesis, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high-dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity, where a clear distinction between the classes is no longer possible. Many real-world images are naturally multilabel, yet the existing annotation might contain only a single label. In this thesis, we propose and analyze a number of loss functions that allow for a certain tolerance in the top k predictions of a learner. Our results indicate consistent improvements over standard loss functions, which, unlike the proposed losses, concentrate the penalty on the first incorrect prediction. All proposed learning methods are complemented with efficient optimization schemes based on stochastic dual coordinate ascent for convex problems and on gradient descent for nonconvex formulations.
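The notion of tolerance in the top k predictions can be illustrated with the top-k error and a simple margin-based relaxation. The sketch below is one illustrative variant (zero loss once the true score beats all but the k-1 strongest competitors by margin 1), not the exact surrogate losses analyzed in the thesis:

```python
import numpy as np

def topk_error(scores, y, k=5):
    """Zero-one error that tolerates the true class anywhere in the top k."""
    topk = np.argsort(scores)[::-1][:k]
    return 0.0 if y in topk else 1.0

def topk_hinge(scores, y, k=5):
    """Margin loss that vanishes once the true score exceeds all but the
    k-1 strongest wrong-class scores by 1 (assumes k < number of classes)."""
    competitors = np.delete(scores, y)           # scores of all wrong classes
    kth = np.sort(competitors)[::-1][k - 1]      # k-th largest wrong-class score
    return max(0.0, 1.0 + kth - scores[y])
```

For k = 1 this reduces to the familiar multiclass hinge loss, which penalizes the single strongest incorrect prediction.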
Export
BibTeX
@phdthesis{Lapinphd17, TITLE = {Image Classification with Limited Training Data and Class Ambiguity}, AUTHOR = {Lapin, Maksim}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69098}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications due to scarcity of the data itself or high costs associated with human annotation. Introduction of additional information and structural constraints can help improve the performance of a learning algorithm. In this thesis, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity where clear distinction between the classes is no longer possible. Many real world images are naturally multilabel yet the existing annotation might only contain a single label. In this thesis, we propose and analyze a number of loss functions that allow for a certain tolerance in top k predictions of a learner. Our results indicate consistent improvements over the standard loss functions that put more penalty on the first incorrect prediction compared to the proposed losses. All proposed learning methods are complemented with efficient optimization schemes that are based on stochastic dual coordinate ascent for convex problems and on gradient descent for nonconvex formulations.}, }
Endnote
%0 Thesis %A Lapin, Maksim %Y Schiele, Bernt %A referee: Hein, Matthias %A referee: Lampert, Christoph %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Image Classification with Limited Training Data and Class Ambiguity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-9345-9 %U urn:nbn:de:bsz:291-scidok-69098 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 227 p. %V phd %9 phd %X Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications due to scarcity of the data itself or high costs associated with human annotation. Introduction of additional information and structural constraints can help improve the performance of a learning algorithm. In this thesis, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity where clear distinction between the classes is no longer possible. Many real world images are naturally multilabel yet the existing annotation might only contain a single label. In this thesis, we propose and analyze a number of loss functions that allow for a certain tolerance in top k predictions of a learner. Our results indicate consistent improvements over the standard loss functions that put more penalty on the first incorrect prediction compared to the proposed losses. All proposed learning methods are complemented with efficient optimization schemes that are based on stochastic dual coordinate ascent for convex problems and on gradient descent for nonconvex formulations. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6909/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[13]
M. Malinowski, “Towards Holistic Machines: From Visual Recognition To Question Answering About Real-world Images,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Computer Vision has undergone major changes over the recent five years. Here, we investigate whether the performance of modern recognition architectures generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and on the foundations of a Visual Turing Test, where scene understanding is tested by a series of questions about the scene's content. In our studies, we propose DAQUAR, the first ‘question answering about real-world images’ dataset, together with two methods that address the problem: a symbolic-based and a neural-based visual question answering architecture. The symbolic-based method relies on a semantic parser, a database of visual facts, and a Bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, image encoder, multimodal embedding, and answer decoder. This architecture has proven effective in capturing language-based biases and has become a standard component of other visual question answering architectures. Along with the methods, we also investigate evaluation metrics that embrace uncertainty in word meaning and various interpretations of the scene and the question.
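The end-to-end pipeline named in the abstract (question encoder, image encoder, multimodal embedding, answer decoder) can be sketched as a small network. The module below is a generic stand-in with assumed dimensions and fusion by elementwise product, not the thesis's exact architecture:

```python
import torch
import torch.nn as nn

class VqaSketch(nn.Module):
    """Question encoder + image-feature encoder -> joint embedding -> answer scores."""
    def __init__(self, vocab_size, num_answers, img_dim=2048, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)          # word embeddings
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)  # question encoder
        self.img_proj = nn.Linear(img_dim, hidden)             # image encoder head
        self.classifier = nn.Linear(hidden, num_answers)       # answer decoder

    def forward(self, question_ids, img_feat):
        # question_ids: (batch, seq_len); img_feat: (batch, img_dim)
        _, (h, _) = self.lstm(self.embed(question_ids))
        joint = h[-1] * torch.tanh(self.img_proj(img_feat))    # multimodal embedding
        return self.classifier(joint)                          # scores over answer vocabulary
```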
Export
BibTeX
@phdthesis{Malinowskiphd17, TITLE = {Towards Holistic Machines: From Visual Recognition To Question Answering About Real-world Image}, AUTHOR = {Malinowski, Mateusz}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-68978}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Computer Vision has undergone major changes over the recent five years. Here, we investigate if the performance of such architectures generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and the foundations of a Visual Turing Test, where the scene understanding is tested by a series of questions about its content. In our studies, we propose DAQUAR, the first {\textquoteleft}question answering about real-world images{\textquoteright} dataset together with methods, termed a symbolic-based and a neural-based visual question answering architectures, that address the problem. The symbolic-based method relies on a semantic parser, a database of visual facts, and a bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, image encoder, multimodal embedding, and answer decoder. This architecture has proven to be effective in capturing language-based biases. It also becomes the standard component of other visual question answering architectures. Along with the methods, we also investigate various evaluation metrics that embraces uncertainty in word's meaning, and various interpretations of the scene and the question.}, }
Endnote
%0 Thesis %A Malinowski, Mateusz %Y Fritz, Mario %A referee: Pinkal, Manfred %A referee: Darrell, Trevor %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Towards Holistic Machines: From Visual Recognition To Question Answering About Real-world Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-9339-5 %U urn:nbn:de:bsz:291-scidok-68978 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 276 p. %V phd %9 phd %X Computer Vision has undergone major changes over the recent five years. Here, we investigate if the performance of such architectures generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and the foundations of a Visual Turing Test, where the scene understanding is tested by a series of questions about its content. In our studies, we propose DAQUAR, the first ‘question answering about real-world images’ dataset together with methods, termed a symbolic-based and a neural-based visual question answering architectures, that address the problem. The symbolic-based method relies on a semantic parser, a database of visual facts, and a bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, image encoder, multimodal embedding, and answer decoder. This architecture has proven to be effective in capturing language-based biases. It also becomes the standard component of other visual question answering architectures. Along with the methods, we also investigate various evaluation metrics that embraces uncertainty in word's meaning, and various interpretations of the scene and the question. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2017/6897/
[14]
S. Mukherjee, “Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. Prior works in this domain operate on a static snapshot of the community, make strong assumptions about the structure of the data (e.g., relational tables), or consider only shallow features for text classification. To address these limitations, we propose probabilistic graphical models that can leverage the joint interplay between multiple factors in online communities --- like user interactions, community dynamics, and textual content --- to automatically assess the credibility of user-contributed online content and the expertise of users and its evolution, with user-interpretable explanations. To this end, we devise new models based on Conditional Random Fields for different settings, such as incorporating partial expert knowledge for semi-supervised learning, and handling discrete labels as well as numeric ratings for fine-grained analysis. This enables applications such as extracting reliable side effects of drugs from user-contributed posts in health forums, and identifying credible content in news communities. Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture these dynamics, we propose generative models based on Hidden Markov Models, Latent Dirichlet Allocation, and Brownian motion to trace the continuous evolution of user expertise and user language models over time. This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. It also enables applications such as identifying helpful product reviews and detecting fake and anomalous reviews with limited information.
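Tracing user expertise with a Hidden Markov Model relies on standard machinery; for instance, the forward algorithm below scores an observed sequence of (discretized) user activity under given initial, transition, and emission probabilities. The discretization of user behaviour into symbols is a modeling choice of the thesis and is not shown here:

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Likelihood of an observation sequence under an HMM.
    pi:  initial state probabilities, shape (S,)
    A:   state transition matrix, shape (S, S)
    B:   emission probabilities, shape (S, V)
    obs: sequence of observed symbol indices."""
    alpha = pi * B[:, obs[0]]          # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate states, absorb next observation
    return alpha.sum()
```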
Export
BibTeX
@phdthesis{Mukherjeephd17, TITLE = {Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities}, AUTHOR = {Mukherjee, Subhabrata}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69269}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. Prior works in this domain operate on a static snapshot of the community, making strong assumptions about the structure of the data (e.g., relational tables), or consider only shallow features for text classification. To address the above limitations, we propose probabilistic graphical models that can leverage the joint interplay between multiple factors in online communities --- like user interactions, community dynamics, and textual content --- to automatically assess the credibility of user-contributed online content, and the expertise of users and their evolution with user-interpretable explanation. To this end, we devise new models based on Conditional Random Fields for different settings like incorporating partial expert knowledge for semi-supervised learning, and handling discrete labels as well as numeric ratings for fine-grained analysis. This enables applications such as extracting reliable side-effects of drugs from user-contributed posts in healthforums, and identifying credible content in news communities. Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture this dynamics, we propose generative models based on Hidden Markov Model, Latent Dirichlet Allocation, and Brownian Motion to trace the continuous evolution of user expertise and their language model over time. This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. This also enables applications such as identifying helpful product reviews, and detecting fake and anomalous reviews with limited information.}, }
Endnote
%0 Thesis %A Mukherjee, Subhabrata %Y Weikum, Gerhard %A referee: Han, Jiawei %A referee: Günnemann, Stephan %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-A648-0 %U urn:nbn:de:bsz:291-scidok-69269 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 166 p. %V phd %9 phd %X One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. Prior works in this domain operate on a static snapshot of the community, making strong assumptions about the structure of the data (e.g., relational tables), or consider only shallow features for text classification. To address the above limitations, we propose probabilistic graphical models that can leverage the joint interplay between multiple factors in online communities --- like user interactions, community dynamics, and textual content --- to automatically assess the credibility of user-contributed online content, and the expertise of users and their evolution with user-interpretable explanation. To this end, we devise new models based on Conditional Random Fields for different settings like incorporating partial expert knowledge for semi-supervised learning, and handling discrete labels as well as numeric ratings for fine-grained analysis. This enables applications such as extracting reliable side-effects of drugs from user-contributed posts in healthforums, and identifying credible content in news communities. Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture this dynamics, we propose generative models based on Hidden Markov Model, Latent Dirichlet Allocation, and Brownian Motion to trace the continuous evolution of user expertise and their language model over time. This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. This also enables applications such as identifying helpful product reviews, and detecting fake and anomalous reviews with limited information. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2017/6926/
[15]
F. Müller, “Analyzing DNA Methylation Signatures of Cell Identity,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Although virtually all cells in an organism share the same genome, regulatory mechanisms give rise to hundreds of different, highly specialized cell types. Understanding these mechanisms has been in the limelight of epigenomic research. It is now evident that cellular identity is inscribed in the epigenome of each individual cell. Nonetheless, the precise mechanisms by which different epigenomic marks are involved in regulating gene expression are just beginning to be unraveled. Furthermore, epigenomic patterns are highly dynamic and subject to environmental influences. Any given cell type is defined by cell populations exhibiting epigenetic heterogeneity at different levels. Characterizing this heterogeneity is paramount in understanding the regulatory role of the epigenome. Different epigenomic marks can be profiled using high-throughput sequencing, and global initiatives have started to provide a comprehensive picture of the human epigenome by assaying a multitude of marks across a broad panel of cell types and conditions. In particular, DNA methylation has been extensively studied for its gene-regulatory role in health and disease. This thesis describes computational methods and pipelines for the analysis of DNA methylation data. It provides concepts for addressing bioinformatic challenges such as the processing of large, epigenome-wide datasets and integrating multiple levels of information in an interpretable manner. We developed RnBeads, an R package that facilitates comprehensive, interpretable analysis of large-scale DNA methylation datasets at the level of single CpGs or genomic regions of interest. With the epiRepeatR pipeline, we introduced additional tools for studying global patterns of epigenomic marks in transposons and other repetitive regions of the genome. Blood-cell differentiation represents a useful model for studying trajectories of cellular differentiation. We developed and applied bioinformatic methods to dissect the DNA methylation landscape of the hematopoietic system. Here, we provide a broad outline of cell-type-specific DNA methylation signatures and phenotypic diversity reflected in the epigenomes of human mature blood cells. We also describe the DNA methylation dynamics in the process of immune memory formation in T helper cells. Moreover, we portrayed epigenetic fingerprints of defined progenitor cell types and derived computational models that were capable of accurately inferring cell identity. We used these models in order to characterize heterogeneity in progenitor cell populations, to identify DNA methylation signatures of hematopoietic differentiation and to infer the epigenomic similarities of blood cell types. Finally, by interpreting DNA methylation patterns in leukemia and derived pluripotent cells, we started to discern how epigenomic patterns are altered in disease and explored how reprogramming of these patterns could potentially be used to restore a non-malignant state. In summary, this work showcases novel methods and computational tools for the identification and interpretation of epigenetic signatures of cell identity. It provides a detailed view on the epigenomic landscape spanned by DNA methylation patterns in hematopoietic cells that enhances our understanding of epigenetic regulation in cell differentiation and disease.
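At the base of such DNA methylation analyses sits a simple per-CpG summary: the fraction of methylated read counts, commonly called the beta value. A minimal sketch follows (the generic M/(M+U) estimator with a stabilizing offset; RnBeads' actual implementation is in R and considerably more involved):

```python
import numpy as np

def beta_values(meth, unmeth, offset=1.0):
    """Per-CpG methylation level in [0, 1]: methylated reads over total
    coverage; the offset dampens noise at poorly covered sites."""
    meth = np.asarray(meth, dtype=float)
    unmeth = np.asarray(unmeth, dtype=float)
    return meth / (meth + unmeth + offset)
```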
Export
BibTeX
@phdthesis{muellerphd17, TITLE = {Analyzing {DNA} Methylation Signatures of Cell Identity}, AUTHOR = {M{\"u}ller, Fabian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69432}, DOI = {10.17617/2.2474737}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Although virtually all cells in an organism share the same genome, regulatory mechanisms give rise to hundreds of different, highly specialized cell types. Understanding these mechanisms has been in the limelight of epigenomic research. It is now evident that cellular identity is inscribed in the epigenome of each individual cell. Nonetheless, the precise mechanisms by which different epigenomic marks are involved in regulating gene expression are just beginning to be unraveled. Furthermore, epigenomic patterns are highly dynamic and subject to environmental influences. Any given cell type is defined by cell populations exhibiting epigenetic heterogeneity at different levels. Characterizing this heterogeneity is paramount in understanding the regulatory role of the epigenome. Different epigenomic marks can be profiled using high-throughput sequencing, and global initiatives have started to provide a comprehensive picture of the human epigenome by assaying a multitude of marks across a broad panel of cell types and conditions. In particular, DNA methylation has been extensively studied for its gene-regulatory role in health and disease. This thesis describes computational methods and pipelines for the analysis of DNA methylation data. It provides concepts for addressing bioinformatic challenges such as the processing of large, epigenome-wide datasets and integrating multiple levels of information in an interpretable manner. We developed RnBeads, an R package that facilitates comprehensive, interpretable analysis of large-scale DNA methylation datasets at the level of single CpGs or genomic regions of interest. With the epiRepeatR pipeline, we introduced additional tools for studying global patterns of epigenomic marks in transposons and other repetitive regions of the genome. Blood-cell differentiation represents a useful model for studying trajectories of cellular differentiation. We developed and applied bioinformatic methods to dissect the DNA methylation landscape of the hematopoietic system. Here, we provide a broad outline of cell-type-specific DNA methylation signatures and phenotypic diversity reflected in the epigenomes of human mature blood cells. We also describe the DNA methylation dynamics in the process of immune memory formation in T helper cells. Moreover, we portrayed epigenetic fingerprints of defined progenitor cell types and derived computational models that were capable of accurately inferring cell identity. We used these models in order to characterize heterogeneity in progenitor cell populations, to identify DNA methylation signatures of hematopoietic differentiation and to infer the epigenomic similarities of blood cell types. Finally, by interpreting DNA methylation patterns in leukemia and derived pluripotent cells, we started to discern how epigenomic patterns are altered in disease and explored how reprogramming of these patterns could potentially be used to restore a non-malignant state. In summary, this work showcases novel methods and computational tools for the identification and interpretation of epigenetic signatures of cell identity. 
It provides a detailed view on the epigenomic landscape spanned by DNA methylation patterns in hematopoietic cells that enhances our understanding of epigenetic regulation in cell differentiation and disease.}, }
Endnote
%0 Thesis %A Müller, Fabian %Y Lengauer, Thomas %A referee: Bock, Christoph %A referee: Brors, Benedikt %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Analyzing DNA Methylation Signatures of Cell Identity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-D9AA-6 %U urn:nbn:de:bsz:291-scidok-69432 %R 10.17617/2.2474737 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 177 p. %V phd %9 phd %X Although virtually all cells in an organism share the same genome, regulatory mechanisms give rise to hundreds of different, highly specialized cell types. Understanding these mechanisms has been in the limelight of epigenomic research. It is now evident that cellular identity is inscribed in the epigenome of each individual cell. Nonetheless, the precise mechanisms by which different epigenomic marks are involved in regulating gene expression are just beginning to be unraveled. Furthermore, epigenomic patterns are highly dynamic and subject to environmental influences. Any given cell type is defined by cell populations exhibiting epigenetic heterogeneity at different levels. Characterizing this heterogeneity is paramount in understanding the regulatory role of the epigenome. Different epigenomic marks can be profiled using high-throughput sequencing, and global initiatives have started to provide a comprehensive picture of the human epigenome by assaying a multitude of marks across a broad panel of cell types and conditions. In particular, DNA methylation has been extensively studied for its gene-regulatory role in health and disease. This thesis describes computational methods and pipelines for the analysis of DNA methylation data. It provides concepts for addressing bioinformatic challenges such as the processing of large, epigenome-wide datasets and integrating multiple levels of information in an interpretable manner. We developed RnBeads, an R package that facilitates comprehensive, interpretable analysis of large-scale DNA methylation datasets at the level of single CpGs or genomic regions of interest. With the epiRepeatR pipeline, we introduced additional tools for studying global patterns of epigenomic marks in transposons and other repetitive regions of the genome. Blood-cell differentiation represents a useful model for studying trajectories of cellular differentiation. We developed and applied bioinformatic methods to dissect the DNA methylation landscape of the hematopoietic system. Here, we provide a broad outline of cell-type-specific DNA methylation signatures and phenotypic diversity reflected in the epigenomes of human mature blood cells. We also describe the DNA methylation dynamics in the process of immune memory formation in T helper cells. Moreover, we portrayed epigenetic fingerprints of defined progenitor cell types and derived computational models that were capable of accurately inferring cell identity. We used these models in order to characterize heterogeneity in progenitor cell populations, to identify DNA methylation signatures of hematopoietic differentiation and to infer the epigenomic similarities of blood cell types. 
Finally, by interpreting DNA methylation patterns in leukemia and derived pluripotent cells, we started to discern how epigenomic patterns are altered in disease and explored how reprogramming of these patterns could potentially be used to restore a non-malignant state. In summary, this work showcases novel methods and computational tools for the identification and interpretation of epigenetic signatures of cell identity. It provides a detailed view on the epigenomic landscape spanned by DNA methylation patterns in hematopoietic cells that enhances our understanding of epigenetic regulation in cell differentiation and disease. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6943/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[16]
A. Rohrbach, “Generation and Grounding of Natural Language Descriptions for Visual Data,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for linguistic concepts, which is necessary for many applications. This thesis focuses on both directions and tackles three specific problems. First, we develop recognition approaches to understand videos of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at a variable level of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach that learns from videos and sentences to describe movie clips, relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people. To summarize, this thesis advances the state of the art in automatic video description and visual grounding and also contributes large datasets for studying the intersection of computer vision and computational linguistics.
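Multimodal Compact Bilinear Pooling approximates the (very large) outer product of two feature vectors by count-sketching each vector and combining the sketches via circular convolution, computed as an elementwise product in the FFT domain. The numpy sketch below follows that general construction; the output dimension and the feature sizes are assumptions:

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project x into d dimensions: y[h[i]] += s[i] * x[i]."""
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def mcb_pool(v, q, d=1024, seed=0):
    """Compact bilinear pooling of image features v and text features q:
    circular convolution of their count sketches, done via FFT."""
    rng = np.random.default_rng(seed)  # fixed seed: hashes must be reused per sample
    hv, hq = rng.integers(0, d, v.size), rng.integers(0, d, q.size)
    sv, sq = rng.choice([-1.0, 1.0], v.size), rng.choice([-1.0, 1.0], q.size)
    fused = np.fft.ifft(np.fft.fft(count_sketch(v, hv, sv, d)) *
                        np.fft.fft(count_sketch(q, hq, sq, d)))
    return np.real(fused)

v = np.random.rand(2048)   # e.g. image CNN features
q = np.random.rand(300)    # e.g. phrase embedding
phi = mcb_pool(v, q)       # fused 1024-d representation
```

In practice the hash indices and signs are drawn once and reused for every sample, which the fixed seed emulates here.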
Export
BibTeX
@phdthesis{Rohrbachphd17, TITLE = {Generation and Grounding of Natural Language Descriptions for Visual Data}, AUTHOR = {Rohrbach, Anna}, LANGUAGE = {eng}, SCHOOL = {universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for the linguistic concepts, necessary for many applications. This thesis focuses on both directions and tackles three specific problems. First, we develop recognition approaches to understand video of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at variable level of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach, which learns from videos and sentences to describe movie clips relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people. To summarize, this thesis advances the state-of-the-art in automatic video description and visual grounding and also contributes large datasets for studying the intersection of computer vision and computational linguistics.}, }
Endnote
%0 Thesis %A Rohrbach, Anna %Y Schiele, Bernt %A referee: Demberg, Vera %A referee: Darrell, Trevor %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Generation and Grounding of Natural Language Descriptions for Visual Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-57D4-E %I universität des Saarlandes %C Saarbrücken %D 2017 %8 02.06.2017 %P X, 215 p. %V phd %9 phd %X Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for the linguistic concepts, necessary for many applications. This thesis focuses on both directions and tackles three specific problems. First, we develop recognition approaches to understand video of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at variable level of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach, which learns from videos and sentences to describe movie clips relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people. To summarize, this thesis advances the state-of-the-art in automatic video description and visual grounding and also contributes large datasets for studying the intersection of computer vision and computational linguistics. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6874/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[17]
A. Siu, “Knowledge-driven Entity Recognition and Disambiguation in Biomedical Text,” Universität des Saarlandes, Saarbrücken, 2017.
Export
BibTeX
@phdthesis{siuphd17, TITLE = {Knowledge-driven Entity Recognition and Disambiguation in Biomedical Text}, AUTHOR = {Siu, Amy}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, }
Endnote
%0 Thesis %A Siu, Amy %Y Weikum, Gerhard %A referee: Berberich, Klaus %A referee: Leser, Ulf %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Knowledge-driven Entity Recognition and Disambiguation in Biomedical Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-DD18-E %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 169 p. %V phd %9 phd
[18]
P. Sun, “Bi-(N-)Cluster Editing and Its Biomedical Applications,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
The extremely fast advances in wet-lab techniques lead to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in today's systems biology. The traditional clustering approach, although widely used to divide data into groups sharing common features, is less powerful in the analysis of heterogeneous data from n different sources (n ≥ 2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large-scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing, modeling the input as n-partite graphs and solving the clustering problem with various strategies. In the first part of the thesis, the complexity and fixed-parameter tractability of the extended bicluster editing model with relaxed constraints (the ?-bicluster editing model) are investigated, and its NP-hardness is proven. Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with performance evaluations and systematic comparisons against other algorithms of the same type for the bi-/n-cluster editing problem. To demonstrate the practical impact, three real-world analyses using n-CluE are performed: (a) prediction of novel genotype-phenotype associations by clustering data from Genome-Wide Association Studies; (b) comparison between n-CluE and eight other biclustering tools on GEO Omnibus microarray data sets; (c) drug-repositioning predictions by co-clustering drug, gene, and disease networks. The outstanding performance of n-CluE in these real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biologically relevant information in bioinformatic analyses.
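Bicluster editing asks for a minimum number of edge insertions and deletions that turn a bipartite graph into vertex-disjoint bicliques. As a sketch of the objective that exact and heuristic strategies like those in n-CluE minimize, the helper below scores a candidate co-clustering; the input encoding (an edge set plus per-side cluster assignments) is an assumption for illustration:

```python
def editing_cost(edges, left_cluster, right_cluster):
    """Edits needed so each co-cluster forms a complete biclique and no
    edges run between clusters. edges: set of (u, v) pairs; *_cluster:
    dicts mapping left/right vertices to cluster ids."""
    deletions = sum(1 for (u, v) in edges
                    if left_cluster[u] != right_cluster[v])
    insertions = 0  # missing edges inside each co-cluster
    clusters = set(left_cluster.values()) | set(right_cluster.values())
    for c in clusters:
        us = [u for u, cu in left_cluster.items() if cu == c]
        vs = [v for v, cv in right_cluster.items() if cv == c]
        present = sum(1 for u in us for v in vs if (u, v) in edges)
        insertions += len(us) * len(vs) - present
    return deletions + insertions
```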
Export
BibTeX
@phdthesis{Sunphd17, TITLE = {Bi-(N-) cluster editing and its biomedical applications}, AUTHOR = {Sun, Peng}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69309}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {he extremely fast advances in wet-lab techniques lead to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in nowadays system biology. The traditional clustering approach, although widely used to divide the data into groups sharing common features, is less powerful in the analysis of heterogeneous data from n different sources (n _ 2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing by modeling the input as n-partite graphs and solving the clustering problem with various strategies. In the first part of the thesis, the complexity and the fixed-parameter tractability of the extended bicluster editing model with relaxed constraints are investigated, namely the ?-bicluster editing model and its NP-hardness is proven. Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with the evaluations on performances and the systematic comparisons against other algorithms of the same type in solving bi-/n-cluster editing problem. To demonstrate the practical impact, three real-world analyses using n-CluE are performed, including (a) prediction of novel genotype-phenotype associations by clustering the data from Genome-Wide Association Studies; (b) comparison between n-CluE and eight other biclustering tools on GEO Omnibus microarray data sets; (c) drug repositioning predictions by co-clustering on drug, gene and disease networks. The outstanding performance of n-CluE in the real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biological relevant information in bioinformatic analyses.}, }
Endnote
%0 Thesis %A Sun, Peng %Y Baumbach, Jan %A referee: Guo, Jiong %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Bi-(N-) cluster editing and its biomedical applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-A65E-F %U urn:nbn:de:bsz:291-scidok-69309 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 192 p. %V phd %9 phd %X he extremely fast advances in wet-lab techniques lead to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in nowadays system biology. The traditional clustering approach, although widely used to divide the data into groups sharing common features, is less powerful in the analysis of heterogeneous data from n different sources (n _ 2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing by modeling the input as n-partite graphs and solving the clustering problem with various strategies. In the first part of the thesis, the complexity and the fixed-parameter tractability of the extended bicluster editing model with relaxed constraints are investigated, namely the ?-bicluster editing model and its NP-hardness is proven. Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with the evaluations on performances and the systematic comparisons against other algorithms of the same type in solving bi-/n-cluster editing problem. To demonstrate the practical impact, three real-world analyses using n-CluE are performed, including (a) prediction of novel genotype-phenotype associations by clustering the data from Genome-Wide Association Studies; (b) comparison between n-CluE and eight other biclustering tools on GEO Omnibus microarray data sets; (c) drug repositioning predictions by co-clustering on drug, gene and disease networks. The outstanding performance of n-CluE in the real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biological relevant information in bioinformatic analyses. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6930/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[19]
C. H. Tang, “Logics for Rule-based Configuration Systems,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Rule-based configuration systems, such as DOPLER at Siemens, are successfully used in industry. These systems make complex domain knowledge available to users and let them derive valid, customized products out of large sets of components. However, maintenance of such systems remains a challenge. Formal models are a prerequisite for the use of automated methods of analysis. This thesis deals with the formalization of rule-based configuration. We develop two logics whose transition semantics are suited for expressing the way systems like DOPLER operate. This is due to the existence of two types of transitions, namely user and rule transitions, and a fixpoint mechanism that determines their dynamic relationship. The first logic, PIDL, models propositional systems, while the second logic, PIDL+, additionally considers arithmetic constraints. They allow the formulation and automated verification of relevant properties of rule-based configuration systems.
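The interplay of the two transition types can be mimicked with a small fixpoint loop: after a user transition adds a choice, rule transitions fire until nothing changes. The sketch below treats a configuration as a set of facts and rules as (premises, consequence) pairs; it illustrates the semantics only and is not PIDL itself, and the example rules are invented:

```python
def rule_fixpoint(facts, rules):
    """Apply rules (premises, consequence) until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, consequence in rules:
            if premises <= facts and consequence not in facts:
                facts.add(consequence)   # rule transition fires
                changed = True
    return facts

# a user transition adds a choice, then the rule fixpoint completes it
config = rule_fixpoint({"cpu_fast"},
                       [({"cpu_fast"}, "needs_big_psu"),
                        ({"needs_big_psu"}, "case_large")])
```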
Export
BibTeX
@phdthesis{Tangphd2017, TITLE = {Logics for Rule-based Configuration Systems}, AUTHOR = {Tang, Ching Hoo}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69639}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Rule-based configuration systems are being successfully used in industry, such as DOPLER at Siemens. Those systems make complex domain knowledge available to users and let them derive valid, customized products out of large sets of components. However, maintenance of such systems remains a challenge. Formal models are a prerequisite for the use of automated methods of analysis. This thesis deals with the formalization of rule-based configuration. We develop two logics whose transition semantics are suited for expressing the way systems like DOPLER operate. This is due to the existence of two types of transitions, namely user and rule transitions, and a fixpoint mechanism that determines their dynamic relationship. The first logic, PIDL, models propositional systems, while the second logic, PIDL+, additionally considers arithmetic constraints. They allow the formulation and automated verification of relevant properties of rule- based configuration systems.}, }
Endnote
%0 Thesis %A Tang, Ching Hoo %Y Weidenbach, Christoph %A referee: Herzig, Andreas %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Logics for Rule-based Configuration Systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-0871-7 %U urn:nbn:de:bsz:291-scidok-69639 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P X, 123 p. %V phd %9 phd %X Rule-based configuration systems are being successfully used in industry, such as DOPLER at Siemens. Those systems make complex domain knowledge available to users and let them derive valid, customized products out of large sets of components. However, maintenance of such systems remains a challenge. Formal models are a prerequisite for the use of automated methods of analysis. This thesis deals with the formalization of rule-based configuration. We develop two logics whose transition semantics are suited for expressing the way systems like DOPLER operate. This is due to the existence of two types of transitions, namely user and rule transitions, and a fixpoint mechanism that determines their dynamic relationship. The first logic, PIDL, models propositional systems, while the second logic, PIDL+, additionally considers arithmetic constraints. They allow the formulation and automated verification of relevant properties of rule- based configuration systems. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2017/6963/
[20]
D. Wand, “Superposition: Types and Induction,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Proof assistants are becoming widespread for the formalization of theories both in computer science and mathematics. They provide rich logics with powerful type systems and machine-checked proofs, which increase confidence in the correctness of complicated and detailed proofs. However, they incur a significant overhead compared to pen-and-paper proofs. This thesis describes work on bridging the gap between higher-order proof assistants and first-order automated theorem provers by extending the capabilities of the automated theorem provers to provide features usually found in proof assistants. My first contribution is the development and implementation of a first-order superposition calculus with a polymorphic type system that supports type classes, together with the accompanying refutational completeness proof for that calculus. The inclusion of the type system into the superposition calculus and solvers completely removes the type-encoding overhead when encoding problems from many proof assistants. My second contribution is the development of SupInd, an extension of the typed superposition calculus that supports data types and structural induction over those data types. It includes heuristics that guide the induction, as well as conjecture-strengthening techniques that can be applied independently of the underlying calculus. I have implemented the contributions in a tool called Pirate. The evaluations of both contributions show promising results.
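Structural induction reduces a conjecture over a data type to one proof obligation per constructor, with an induction hypothesis for each recursive argument. The generator below illustrates that scheme for a simple constructor table; it is a toy illustration of the kind of obligation generation SupInd automates, not its implementation:

```python
def induction_obligations(goal, constructors):
    """constructors: name -> list of argument sorts, where the sort
    'self' marks a recursive occurrence of the datatype."""
    obligations = []
    for name, args in constructors.items():
        hyps = [f"{goal}(x{i})" for i, sort in enumerate(args) if sort == "self"]
        term = f"{name}({', '.join(f'x{i}' for i in range(len(args)))})" if args else name
        obligations.append((hyps, f"{goal}({term})"))
    return obligations

# e.g. lists: nil and cons(head: elem, tail: self)
print(induction_obligations("P", {"nil": [], "cons": ["elem", "self"]}))
# [([], 'P(nil)'), (['P(x1)'], 'P(cons(x0, x1))')]
```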
Export
BibTeX
@phdthesis{wandphd2017, TITLE = {Superposition: Types and Induction}, AUTHOR = {Wand, Daniel}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69522}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Proof assistants are becoming widespread for formalization of theories both in computer science and mathematics. They provide rich logics with powerful type systems and machine-checked proofs which increase the confidence in the correctness in complicated and detailed proofs. However, they incur a significant overhead compared to pen-and-paper proofs. This thesis describes work on bridging the gap between high-order proof assistants and first-order automated theorem provers by extending the capabilities of the automated theorem provers to provide features usually found in proof assistants. My first contribution is the development and implementation of a first-order superposition calculus with a polymorphic type system that supports type classes and the accompanying refutational completeness proof for that calculus. The inclusion of the type system into the superposition calculus and solvers completely removes the type encoding overhead when encoding problems from many proof assistants. My second contribution is the development of SupInd, an extension of the typed superposition calculus that supports data types and structural induction over those data types. It includes heuristics that guide the induction and conjecture strengthening techniques, which can be applied independently of the underlying calculus. I have implemented the contributions in a tool called Pirate. The evaluations of both contributions show promising results.}, }
Endnote
%0 Thesis %A Wand, Daniel %Y Weidenbach, Christoph %A referee: Blanchette, Jasmin Christian %A referee: Sutcliffe, Geoff %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Superposition: Types and Induction %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-E99C-5 %U urn:nbn:de:bsz:291-scidok-69522 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P x, 167 p. %V phd %9 phd %X Proof assistants are becoming widespread for the formalization of theories in both computer science and mathematics. They provide rich logics with powerful type systems and machine-checked proofs, which increase confidence in the correctness of complicated and detailed proofs. However, they incur a significant overhead compared to pen-and-paper proofs. This thesis describes work on bridging the gap between higher-order proof assistants and first-order automated theorem provers by extending the capabilities of the automated theorem provers to provide features usually found in proof assistants. My first contribution is the development and implementation of a first-order superposition calculus with a polymorphic type system that supports type classes, together with the accompanying refutational completeness proof for that calculus. The inclusion of the type system in the superposition calculus and solvers completely removes the type-encoding overhead when encoding problems from many proof assistants. My second contribution is the development of SupInd, an extension of the typed superposition calculus that supports data types and structural induction over those data types. It includes heuristics that guide the induction, as well as conjecture-strengthening techniques that can be applied independently of the underlying calculus. I have implemented these contributions in a tool called Pirate. The evaluations of both contributions show promising results. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6952/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[21]
M. Weigel, “Interactive On-Skin Devices for Expressive Touch-based Interactions,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Skin has been proposed as a large, always-available, and easy-to-access input surface for mobile computing. However, it is fundamentally different from prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques. We present the findings from an elicitation study on how and where people interact on their skin. The findings show that participants use various body locations for on-skin interaction. Moreover, they show that skin allows for expressive interaction using multi-touch input and skin-specific modalities. We contribute three skin-worn device classes and their interaction techniques to enable expressive on-skin interactions: iSkin investigates multi-touch and pressure input on various body locations. SkinMarks supports touch, squeeze, and bend sensing with co-located visual output. The devices' conformality to skin enables interaction on highly challenging body locations. Finally, ExpressSkin investigates expressive interaction techniques using fluid combinations of high-resolution pressure, shear, and squeeze input. Taken together, this thesis contributes towards expressive on-skin interaction with multi-touch and skin-specific input modalities on various body locations.
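As a rough illustration of how touch, pressure, squeeze, and bend input might be distinguished from raw sensor channels, consider the following sketch; the channels, thresholds, and labels are invented for exposition and do not reflect the actual iSkin, SkinMarks, or ExpressSkin pipelines:

    # Illustrative sketch only: map three invented, normalized sensor
    # channels to a coarse on-skin input modality. All thresholds are
    # assumptions for exposition, not values from the thesis.

    def classify_onskin_event(capacitance, pressure, strain):
        """Return a coarse input modality from three sensor channels."""
        TOUCH_CAP = 0.5       # capacitance indicating finger contact
        PRESS_MIN = 0.3       # pressure separating touch from press
        SQUEEZE_STRAIN = 0.6  # positive strain: skin pinched together
        BEND_STRAIN = -0.6    # negative strain: skin stretched by a joint
        if strain >= SQUEEZE_STRAIN:
            return "squeeze"
        if strain <= BEND_STRAIN:
            return "bend"
        if capacitance >= TOUCH_CAP:
            return "press" if pressure >= PRESS_MIN else "touch"
        return "none"

    print(classify_onskin_event(0.8, 0.1, 0.0))  # 'touch'
    print(classify_onskin_event(0.8, 0.7, 0.0))  # 'press'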
Export
BibTeX
@phdthesis{Weigelphd17, TITLE = {Interactive On-Skin Devices for Expressive Touch-based Interactions}, AUTHOR = {Weigel, Martin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-68857}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Skin has been proposed as a large, always-available, and easy-to-access input surface for mobile computing. However, it is fundamentally different from prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques. We present the findings from an elicitation study on how and where people interact on their skin. The findings show that participants use various body locations for on-skin interaction. Moreover, they show that skin allows for expressive interaction using multi-touch input and skin-specific modalities. We contribute three skin-worn device classes and their interaction techniques to enable expressive on-skin interactions: iSkin investigates multi-touch and pressure input on various body locations. SkinMarks supports touch, squeeze, and bend sensing with co-located visual output. The devices' conformality to skin enables interaction on highly challenging body locations. Finally, ExpressSkin investigates expressive interaction techniques using fluid combinations of high-resolution pressure, shear, and squeeze input. Taken together, this thesis contributes towards expressive on-skin interaction with multi-touch and skin-specific input modalities on various body locations.}, }
Endnote
%0 Thesis %A Weigel, Martin %Y Steimle, Jürgen %A referee: Olwal, Alex %A referee: Krüger, Antonio %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Interactive On-Skin Devices for Expressive Touch-based Interactions %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-904F-D %U urn:nbn:de:bsz:291-scidok-68857 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 153 p. %V phd %9 phd %X Skin has been proposed as a large, always-available, and easy-to-access input surface for mobile computing. However, it is fundamentally different from prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques. We present the findings from an elicitation study on how and where people interact on their skin. The findings show that participants use various body locations for on-skin interaction. Moreover, they show that skin allows for expressive interaction using multi-touch input and skin-specific modalities. We contribute three skin-worn device classes and their interaction techniques to enable expressive on-skin interactions: iSkin investigates multi-touch and pressure input on various body locations. SkinMarks supports touch, squeeze, and bend sensing with co-located visual output. The devices' conformality to skin enables interaction on highly challenging body locations. Finally, ExpressSkin investigates expressive interaction techniques using fluid combinations of high-resolution pressure, shear, and squeeze input. Taken together, this thesis contributes towards expressive on-skin interaction with multi-touch and skin-specific input modalities on various body locations. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6885/
[22]
X. Wu, “Structure-aware Content Creation,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging the real and virtual worlds, which prompts a huge demand for three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem and a long-standing challenge in computer graphics and related fields. In this thesis, we propose several techniques for easing the content creation process, which share the common theme of being structure-aware, i.e., maintaining global relations among the parts of a shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because their concise yet highly abstract principles are universally applicable to most regular patterns. We present our work from three aspects. First, we characterized spaces of symmetry-preserving deformations and developed a method to explore this space in real time, which significantly simplified the generation of symmetry-preserving shape variants. Second, we empirically studied three-dimensional offset statistics and developed a fully automatic retargeting application that builds on the verified sparsity of these statistics. Finally, we made a step forward in solving the approximate three-dimensional partial symmetry detection problem using a novel co-occurrence analysis method, which could serve as a foundation for higher-level applications.
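To illustrate what a symmetry-preserving deformation means in the simplest case, the following numpy sketch projects a vertex displacement field onto the subspace that respects a reflection across the x = 0 plane. The mirrored-vertex pairing is assumed given, and this is an illustration of the general idea rather than the method developed in the thesis:

    # Illustrative sketch: keep a deformed shape reflection-symmetric by
    # averaging each displacement with its partner's mirrored displacement.
    # The vertex pairing across the x = 0 plane is an assumed input.
    import numpy as np

    def symmetrize_displacement(disp, pairs):
        """Project displacements onto the reflection-symmetric subspace."""
        out = disp.copy()
        mirror = np.array([-1.0, 1.0, 1.0])  # negate the x component
        for i, j in pairs:
            avg = 0.5 * (disp[i] + mirror * disp[j])
            out[i] = avg
            out[j] = mirror * avg  # partner gets the mirrored displacement
        return out

    # An asymmetric edit on vertex 0 is projected back onto the
    # symmetry-preserving subspace shared with its mirror, vertex 1.
    disp = np.array([[0.2, 0.1, 0.0], [0.0, 0.0, 0.0]])
    print(symmetrize_displacement(disp, [(0, 1)]))
    # [[ 0.1   0.05  0.  ]
    #  [-0.1   0.05  0.  ]]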
Export
BibTeX
@phdthesis{wuphd2017, TITLE = {Structure-aware Content Creation}, AUTHOR = {Wu, Xiaokun}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67750}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging the real and virtual worlds, which prompts a huge demand for three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem and a long-standing challenge in computer graphics and related fields. In this thesis, we propose several techniques for easing the content creation process, which share the common theme of being structure-aware, i.e., maintaining global relations among the parts of a shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because their concise yet highly abstract principles are universally applicable to most regular patterns. We present our work from three aspects. First, we characterized spaces of symmetry-preserving deformations and developed a method to explore this space in real time, which significantly simplified the generation of symmetry-preserving shape variants. Second, we empirically studied three-dimensional offset statistics and developed a fully automatic retargeting application that builds on the verified sparsity of these statistics. Finally, we made a step forward in solving the approximate three-dimensional partial symmetry detection problem using a novel co-occurrence analysis method, which could serve as a foundation for higher-level applications.}, }
Endnote
%0 Thesis %A Wu, Xiaokun %Y Seidel, Hans-Peter %A referee: Wand, Michael %A referee: Hildebrandt, Klaus %A referee: Klein, Reinhard %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Structure-aware Content Creation : Detection, Retargeting and Deformation %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-8072-6 %U urn:nbn:de:bsz:291-scidok-67750 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P viii, 61 p. %V phd %9 phd %X Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging the real and virtual worlds, which prompts a huge demand for three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem and a long-standing challenge in computer graphics and related fields. In this thesis, we propose several techniques for easing the content creation process, which share the common theme of being structure-aware, i.e., maintaining global relations among the parts of a shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because their concise yet highly abstract principles are universally applicable to most regular patterns. We present our work from three aspects. First, we characterized spaces of symmetry-preserving deformations and developed a method to explore this space in real time, which significantly simplified the generation of symmetry-preserving shape variants. Second, we empirically studied three-dimensional offset statistics and developed a fully automatic retargeting application that builds on the verified sparsity of these statistics. Finally, we made a step forward in solving the approximate three-dimensional partial symmetry detection problem using a novel co-occurrence analysis method, which could serve as a foundation for higher-level applications. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6775/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de