Publications of the International Max Planck Research School for Computer Science

PhD Thesis

2017
[1]
S. Dutta, “Efficient Knowledge Management for Named Entities from Text,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge in this regard is the precise identification and disambiguation of entities across documents, so that attributes and relations can be extracted and properly represented in knowledge bases. Additionally, the applicability of such repositories involves not only the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation tackles the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. It presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and for disambiguating them to canonical entities present in a knowledge base, using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. It further presents language and consistency features for classification models that compute the credibility of extracted textual facts, ensuring the quality of the extracted information. Finally, it presents an encoding algorithm, based on frequent term detection and improved data locality, that represents entities for enhanced knowledge base storage and query performance.
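As an illustration of the last point only: below is a minimal, hypothetical Python sketch of frequent-term dictionary encoding, in which terms that recur across many facts are replaced by short integer codes. It is not the thesis's encoding algorithm, which additionally optimizes data locality; all names here are invented.

    from collections import Counter

    def build_dictionary(facts, top_k=1000):
        # Count how often each term occurs across all facts and assign the
        # top_k most frequent terms small integer codes.
        counts = Counter(term for fact in facts for term in fact)
        return {term: i for i, (term, _) in enumerate(counts.most_common(top_k))}

    def encode(fact, code):
        # Frequent terms become small integers; rare terms stay verbatim.
        return tuple(code.get(term, term) for term in fact)

    facts = [("Alice", "bornIn", "Paris"), ("Bob", "bornIn", "Paris")]
    code = build_dictionary(facts)
    print([encode(f, code) for f in facts])  # e.g. [(2, 0, 1), (3, 0, 1)]

Shared predicates and objects shrink to single integers, which is what makes the stored facts both smaller and faster to scan.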
Export
BibTeX
@phdthesis{duttaphd17, TITLE = {Efficient Knowledge Management for Named Entities from Text}, AUTHOR = {Dutta, Sourav}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67924}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge in this regard is the precise identification and disambiguation of entities across documents, so that attributes and relations can be extracted and properly represented in knowledge bases. Additionally, the applicability of such repositories involves not only the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation tackles the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. It presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and for disambiguating them to canonical entities present in a knowledge base, using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. It further presents language and consistency features for classification models that compute the credibility of extracted textual facts, ensuring the quality of the extracted information. Finally, it presents an encoding algorithm, based on frequent term detection and improved data locality, that represents entities for enhanced knowledge base storage and query performance.}, }
Endnote
%0 Thesis %A Dutta, Sourav %Y Weikum, Gerhard %A referee: Nejdl, Wolfgang %A referee: Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T Efficient Knowledge Management for Named Entities from Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-A793-E %U urn:nbn:de:bsz:291-scidok-67924 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P xv, 134 p. %V phd %9 phd %X The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge in this regard is the precise identification and disambiguation of entities across documents, so that attributes and relations can be extracted and properly represented in knowledge bases. Additionally, the applicability of such repositories involves not only the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation tackles the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. It presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and for disambiguating them to canonical entities present in a knowledge base, using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. It further presents language and consistency features for classification models that compute the credibility of extracted textual facts, ensuring the quality of the extracted information. Finally, it presents an encoding algorithm, based on frequent term detection and improved data locality, that represents entities for enhanced knowledge base storage and query performance. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6792/
[2]
S. Gurajada, “Distributed Querying of Large Labeled Graphs,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Graphs are a vital abstract data type with profound significance in several applications. Because of their versatility, graphs have been adapted into several different forms, and one such adaptation with many practical applications is the “Labeled Graph”, where vertices and edges are labeled. An enormous research effort has been invested in the task of managing and querying graphs, yet many challenges remain unsolved. In this thesis, we advance the state of the art for the following query models, and propose a distributed solution to process them in an efficient and scalable manner.
• Set Reachability. We formalize and investigate a generalization of the basic notion of reachability, called set reachability, which deals with finding all reachable pairs for given source and target sets. We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query. This is achieved by precomputation, replication, and indexing of partial reachabilities among the boundary vertices.
• Basic Graph Patterns (BGP). Supported by the majority of query languages, BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on asynchronous execution, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner.
• Generalized Graph Patterns (GGP). These queries combine the semantics of pattern matching and navigational queries, and are popular in scenarios where the schema of the underlying graph is either unknown or only partially known. We present a distributed solution with a bimodal indexing layout that individually supports efficient processing of BGP queries and navigational queries. Furthermore, we design a unified query optimizer and processor to process GGP queries efficiently and scalably.
To this end, we propose a prototype distributed engine, coined “TriAD” (Triple Asynchronous and Distributed), that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets.
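For readers unfamiliar with the first query model, the semantics of a set reachability query can be pinned down in a few lines. The following single-machine Python baseline (a plain BFS from every source, not the thesis's one-round distributed solution) is a hypothetical sketch of those semantics only:

    from collections import deque

    def set_reachability(adj, sources, targets):
        # Report every pair (s, t) with s in sources, t in targets, and t
        # reachable from s. adj maps a vertex to its out-neighbours.
        targets = set(targets)
        pairs = set()
        for s in sources:
            seen, queue = {s}, deque([s])
            while queue:
                u = queue.popleft()
                if u in targets:
                    pairs.add((s, u))
                for v in adj.get(u, ()):
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
        return pairs

    adj = {1: [2], 2: [3], 4: [3]}
    print(set_reachability(adj, sources=[1, 4], targets=[3]))  # {(1, 3), (4, 3)}

The distributed solution described above replaces the per-source traversals with precomputed partial reachabilities among partition-boundary vertices, which is what brings the communication down to a single round.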
Export
BibTeX
@phdthesis{guraphd2017, TITLE = {Distributed Querying of Large Labeled Graphs}, AUTHOR = {Gurajada, Sairam}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67738}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Graphs are a vital abstract data type with profound significance in several applications. Because of their versatility, graphs have been adapted into several different forms, and one such adaptation with many practical applications is the {\textquotedblleft}Labeled Graph{\textquotedblright}, where vertices and edges are labeled. An enormous research effort has been invested in the task of managing and querying graphs, yet many challenges remain unsolved. In this thesis, we advance the state of the art for the following query models, and propose a distributed solution to process them in an efficient and scalable manner. \mbox{$\bullet$} Set Reachability. We formalize and investigate a generalization of the basic notion of reachability, called set reachability, which deals with finding all reachable pairs for given source and target sets. We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query. This is achieved by precomputation, replication, and indexing of partial reachabilities among the boundary vertices. \mbox{$\bullet$} Basic Graph Patterns (BGP). Supported by the majority of query languages, BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on asynchronous execution, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner. \mbox{$\bullet$} Generalized Graph Patterns (GGP). These queries combine the semantics of pattern matching and navigational queries, and are popular in scenarios where the schema of the underlying graph is either unknown or only partially known. We present a distributed solution with a bimodal indexing layout that individually supports efficient processing of BGP queries and navigational queries. Furthermore, we design a unified query optimizer and processor to process GGP queries efficiently and scalably. To this end, we propose a prototype distributed engine, coined {\textquotedblleft}TriAD{\textquotedblright} (Triple Asynchronous and Distributed), that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets.}, }
Endnote
%0 Thesis %A Gurajada, Sairam %Y Theobald, Martin %A referee: Weikum, Gerhard %A referee: Özsu, M. Tamer %A referee: Michel, Sebastian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T Distributed Querying of Large Labeled Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-8202-E %U urn:nbn:de:bsz:291-scidok-67738 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P x, 167 p. %V phd %9 phd %X Graphs are a vital abstract data type with profound significance in several applications. Because of their versatility, graphs have been adapted into several different forms, and one such adaptation with many practical applications is the “Labeled Graph”, where vertices and edges are labeled. An enormous research effort has been invested in the task of managing and querying graphs, yet many challenges remain unsolved. In this thesis, we advance the state of the art for the following query models, and propose a distributed solution to process them in an efficient and scalable manner. • Set Reachability. We formalize and investigate a generalization of the basic notion of reachability, called set reachability, which deals with finding all reachable pairs for given source and target sets. We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query. This is achieved by precomputation, replication, and indexing of partial reachabilities among the boundary vertices. • Basic Graph Patterns (BGP). Supported by the majority of query languages, BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on asynchronous execution, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner. • Generalized Graph Patterns (GGP). These queries combine the semantics of pattern matching and navigational queries, and are popular in scenarios where the schema of the underlying graph is either unknown or only partially known. We present a distributed solution with a bimodal indexing layout that individually supports efficient processing of BGP queries and navigational queries. Furthermore, we design a unified query optimizer and processor to process GGP queries efficiently and scalably. To this end, we propose a prototype distributed engine, coined “TriAD” (Triple Asynchronous and Distributed), that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6773/
[3]
J. Kalojanov, “R-symmetry for Triangle Meshes: Detection and Applications,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
In this thesis, we investigate a certain type of local similarity between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have a completely different global structure. This allows r-microtiling to be used for inverse modeling of shape variations, and we develop a method for shape decomposition into rigid, 3D-manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: we consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity, and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.
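To convey the notion of identical r-neighborhoods on a toy point set, here is a hypothetical Python sketch that groups points by a crude rotation-invariant signature of their radius-r neighborhood (the sorted multiset of neighbor distances). The thesis detects exact rigid matches on triangle mesh surfaces; this simplification is ours.

    import numpy as np

    def microtile_labels(points, r, decimals=6):
        # Two points receive the same label when their r-neighborhoods share
        # the same sorted multiset of neighbor distances: a weak,
        # rotation-invariant proxy for "identical spherical neighborhoods".
        pts = np.asarray(points, dtype=float)
        labels, out = {}, []
        for p in pts:
            d = np.linalg.norm(pts - p, axis=1)
            sig = tuple(np.round(np.sort(d[d <= r]), decimals))
            out.append(labels.setdefault(sig, len(labels)))
        return out

    # The four corners of a unit square are locally indistinguishable for r = 1.
    print(microtile_labels([(0, 0), (1, 0), (0, 1), (1, 1)], r=1.0))  # [0, 0, 0, 0]

Points sharing a label would belong to the same building block; on real meshes the signature must of course distinguish full rigid congruence, not just distance multisets.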
Export
BibTeX
@phdthesis{Kalojanovphd2017, TITLE = {R-symmetry for Triangle Meshes: Detection and Applications}, AUTHOR = {Kalojanov, Javor}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {In this thesis, we investigate a certain type of local similarity between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have a completely different global structure. This allows r-microtiling to be used for inverse modeling of shape variations, and we develop a method for shape decomposition into rigid, 3D-manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: we consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity, and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.}, }
Endnote
%0 Thesis %A Kalojanov, Javor %Y Slusallek, Philipp %A referee: Wand, Michael %A referee: Mitra, Niloy %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T R-symmetry for Triangle Meshes: Detection and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-96A3-B %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 94 p. %V phd %9 phd %X In this thesis, we investigate a certain type of local similarity between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have a completely different global structure. This allows r-microtiling to be used for inverse modeling of shape variations, and we develop a method for shape decomposition into rigid, 3D-manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: we consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity, and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6787/
[4]
X. Wu, “Structure-aware Content Creation,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging the real and virtual worlds, which prompts a huge demand for massive three-dimensional digital content. Reducing the effort required for three-dimensional modeling, however, has been a practical problem and a long-standing challenge in computer graphics and related fields. In this thesis, we propose several techniques for lightening the content creation process, which share the common theme of being structure-aware, i.e., maintaining global relations among the parts of a shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because their concise yet highly abstract principles are universally applicable to most regular patterns. We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry-preserving deformations, and developed a method to explore this space in real time, which significantly simplified the generation of symmetry-preserving shape variants. Second, we empirically studied three-dimensional offset statistics, and developed a fully automatic retargeting application based on the verified sparsity. Finally, we made a step forward in solving the approximate three-dimensional partial symmetry detection problem, using a novel co-occurrence analysis method, which could serve as the foundation for high-level applications.
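As a toy illustration of the core constraint behind the first contribution (edits must commute with the detected symmetries), consider this hypothetical Python sketch: a representative point is displaced, and every symmetric copy is regenerated through its symmetry transform, so the variant stays symmetric by construction. The thesis's actual characterization of the deformation space is considerably more general.

    import numpy as np

    def propagate_edit(point, transforms, delta):
        # Displace one representative point, then rebuild each symmetric copy
        # by re-applying its symmetry transform (3x3 linear maps for brevity).
        p = np.asarray(point, dtype=float) + np.asarray(delta, dtype=float)
        return [T @ p for T in transforms]

    # Mirror symmetry about the yz-plane: the mirrored copy moves consistently,
    # so the pair of points remains symmetric after the edit.
    mirror = np.diag([-1.0, 1.0, 1.0])
    print(propagate_edit([1.0, 0.0, 0.0], [np.eye(3), mirror], delta=[0.2, 0.0, 0.0]))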
Export
BibTeX
@phdthesis{wuphd2017, TITLE = {Structure-aware Content Creation}, AUTHOR = {Wu, Xiaokun}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67750}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging the real and virtual worlds, which prompts a huge demand for massive three-dimensional digital content. Reducing the effort required for three-dimensional modeling, however, has been a practical problem and a long-standing challenge in computer graphics and related fields. In this thesis, we propose several techniques for lightening the content creation process, which share the common theme of being structure-aware, i.e., maintaining global relations among the parts of a shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because their concise yet highly abstract principles are universally applicable to most regular patterns. We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry-preserving deformations, and developed a method to explore this space in real time, which significantly simplified the generation of symmetry-preserving shape variants. Second, we empirically studied three-dimensional offset statistics, and developed a fully automatic retargeting application based on the verified sparsity. Finally, we made a step forward in solving the approximate three-dimensional partial symmetry detection problem, using a novel co-occurrence analysis method, which could serve as the foundation for high-level applications.}, }
Endnote
%0 Thesis %A Wu, Xiaokun %Y Seidel, Hans-Peter %A referee: Wand, Michael %A referee: Hildebrandt, Klaus %A referee: Klein, Reinhard %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Structure-aware Content Creation : Detection, Retargeting and Deformation %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-8072-6 %U urn:nbn:de:bsz:291-scidok-67750 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P viii, 61 p. %V phd %9 phd %X Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging the real and virtual worlds, which prompts a huge demand for massive three-dimensional digital content. Reducing the effort required for three-dimensional modeling, however, has been a practical problem and a long-standing challenge in computer graphics and related fields. In this thesis, we propose several techniques for lightening the content creation process, which share the common theme of being structure-aware, i.e., maintaining global relations among the parts of a shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because their concise yet highly abstract principles are universally applicable to most regular patterns. We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry-preserving deformations, and developed a method to explore this space in real time, which significantly simplified the generation of symmetry-preserving shape variants. Second, we empirically studied three-dimensional offset statistics, and developed a fully automatic retargeting application based on the verified sparsity. Finally, we made a step forward in solving the approximate three-dimensional partial symmetry detection problem, using a novel co-occurrence analysis method, which could serve as the foundation for high-level applications. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6775/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
2016
[5]
N. Azmy, “A Machine-checked Proof of Correctness of Pastry,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
A distributed hash table (DHT) is a peer-to-peer network that offers the function of a classic hash table, but where different key-value pairs are stored at different nodes on the network. Like a classic hash table, the main function provided by a DHT is key lookup, which retrieves the value stored at a given key. Examples of DHT protocols include Chord, Pastry, Kademlia and Tapestry. Such DHT protocols come with certain correctness and performance guarantees, but formal verification typically discovers border cases that violate those guarantees. In his PhD thesis, Tianxiang Lu reported correctness problems in published versions of Pastry and developed a model called LuPastry, for which he provided a partial proof of correct delivery of lookup messages assuming no node failure, mechanized in the TLA+ Proof System. In analyzing Lu's proof, I discovered that it contained unproven assumptions, and found counterexamples to several of these assumptions. The contribution of this thesis is threefold. First, I present LuPastry+, a revised TLA+ specification of LuPastry. Aside from needed bug fixes, LuPastry+ contains new definitions that make the specification more modular and significantly improve proof automation. Second, I present a complete TLA+ proof of correct delivery for LuPastry+. Third, I prove that the final step of the node join process of LuPastry/LuPastry+ is not necessary to achieve consistency. In particular, I develop a new specification with a simpler node join process, which I denote by Simplified LuPastry+, and prove correct delivery of lookup messages for this new specification. The proof of correctness of Simplified LuPastry+ is written by reusing the proof for LuPastry+, which represents a success story in proof reuse, especially for proofs of this size. Each of the two proofs amounts to over 32,000 proof steps; to my knowledge, they are currently the largest proofs written in the TLA+ language, and, together with Lu's proof, the only examples of applying full theorem proving to the verification of DHT protocols.
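For readers new to DHTs, the following hypothetical Python toy shows the key-lookup contract in its simplest consistent-hashing form: each key is stored on the node whose identifier follows the key's hash on a ring. Everything that makes Pastry hard to verify (routing tables, concurrent joins, node failures) is deliberately absent.

    import hashlib

    def ring_hash(key, space=2**16):
        # Hash a key onto a small identifier ring.
        return int(hashlib.sha1(key.encode()).hexdigest(), 16) % space

    class ToyDHT:
        def __init__(self, node_ids):
            self.ring = sorted(node_ids)
            self.store = {n: {} for n in self.ring}

        def owner(self, key):
            # The key lives on the first node clockwise from its hash.
            x = ring_hash(key)
            return next((n for n in self.ring if n >= x), self.ring[0])

        def put(self, key, value):
            self.store[self.owner(key)][key] = value

        def lookup(self, key):
            # The main DHT function: retrieve the value stored at a key.
            return self.store[self.owner(key)].get(key)

    dht = ToyDHT([10000, 30000, 50000])
    dht.put("pastry", 42)
    print(dht.lookup("pastry"))  # 42

Correct delivery, the property proved in the thesis, amounts to every lookup reaching the node currently responsible for the key even while the set of nodes changes.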
Export
BibTeX
@phdthesis{Azmyphd16, TITLE = {A Machine-checked Proof of Correctness of Pastry}, AUTHOR = {Azmy, Noran}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67309}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, ABSTRACT = {A distributed hash table (DHT) is a peer-to-peer network that offers the function of a classic hash table, but where different key-value pairs are stored at different nodes on the network. Like a classic hash table, the main function provided by a DHT is key lookup, which retrieves the value stored at a given key. Examples of DHT protocols include Chord, Pastry, Kademlia and Tapestry. Such DHT protocols come with certain correctness and performance guarantees, but formal verification typically discovers border cases that violate those guarantees. In his PhD thesis, Tianxiang Lu reported correctness problems in published versions of Pastry and developed a model called LuPastry, for which he provided a partial proof of correct delivery of lookup messages assuming no node failure, mechanized in the TLA+ Proof System. In analyzing Lu's proof, I discovered that it contained unproven assumptions, and found counterexamples to several of these assumptions. The contribution of this thesis is threefold. First, I present LuPastry+, a revised TLA+ specification of LuPastry. Aside from needed bug fixes, LuPastry+ contains new definitions that make the specification more modular and significantly improve proof automation. Second, I present a complete TLA+ proof of correct delivery for LuPastry+. Third, I prove that the final step of the node join process of LuPastry/LuPastry+ is not necessary to achieve consistency. In particular, I develop a new specification with a simpler node join process, which I denote by Simplified LuPastry+, and prove correct delivery of lookup messages for this new specification. The proof of correctness of Simplified LuPastry+ is written by reusing the proof for LuPastry+, which represents a success story in proof reuse, especially for proofs of this size. Each of the two proofs amounts to over 32,000 proof steps; to my knowledge, they are currently the largest proofs written in the TLA+ language, and, together with Lu's proof, the only examples of applying full theorem proving to the verification of DHT protocols.}, }
Endnote
%0 Thesis %A Azmy, Noran %Y Weidenbach, Christoph %A referee: Merz, Stephan %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T A Machine-checked Proof of Correctness of Pastry : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-3BAD-9 %U urn:nbn:de:bsz:291-scidok-67309 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P ix, 119 p. %V phd %9 phd %X A distributed hash table (DHT) is a peer-to-peer network that offers the function of a classic hash table, but where different key-value pairs are stored at different nodes on the network. Like a classic hash table, the main function provided by a DHT is key lookup, which retrieves the value stored at a given key. Examples of DHT protocols include Chord, Pastry, Kademlia and Tapestry. Such DHT protocols come with certain correctness and performance guarantees, but formal verification typically discovers border cases that violate those guarantees. In his PhD thesis, Tianxiang Lu reported correctness problems in published versions of Pastry and developed a model called LuPastry, for which he provided a partial proof of correct delivery of lookup messages assuming no node failure, mechanized in the TLA+ Proof System. In analyzing Lu's proof, I discovered that it contained unproven assumptions, and found counterexamples to several of these assumptions. The contribution of this thesis is threefold. First, I present LuPastry+, a revised TLA+ specification of LuPastry. Aside from needed bug fixes, LuPastry+ contains new definitions that make the specification more modular and significantly improve proof automation. Second, I present a complete TLA+ proof of correct delivery for LuPastry+. Third, I prove that the final step of the node join process of LuPastry/LuPastry+ is not necessary to achieve consistency. In particular, I develop a new specification with a simpler node join process, which I denote by Simplified LuPastry+, and prove correct delivery of lookup messages for this new specification. The proof of correctness of Simplified LuPastry+ is written by reusing the proof for LuPastry+, which represents a success story in proof reuse, especially for proofs of this size. Each of the two proofs amounts to over 32,000 proof steps; to my knowledge, they are currently the largest proofs written in the TLA+ language, and, together with Lu's proof, the only examples of applying full theorem proving to the verification of DHT protocols. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6730/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[6]
M. Bachynskyi, “Biomechanical Models for Human-Computer Interaction,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, are already a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number, or even absence, of input movement constraints imposed by a device form factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of four issues for the research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput, ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions:
- adopting optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method;
- identifying the applicability limits of the method for a range of HCI tasks;
- validating the method outputs against ground-truth recordings in a typical HCI setting;
- demonstrating the added value of the method in the analysis of performance and ergonomics of touchscreen devices; and
- summarizing the performance and ergonomics of a movement space through a clustering of physiological data.
The proposed method successfully deals with the four above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) and at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of the movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity.
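As one concrete example of how speed and accuracy are commonly combined into a single throughput number in HCI (a standard ISO 9241-9 style formulation, not necessarily the exact metric used in the thesis), here is a hypothetical Python sketch of effective Fitts'-law throughput:

    import math
    import statistics

    def fitts_throughput(distance, endpoint_errors, movement_times):
        # Accuracy enters through the spread of selection endpoints,
        # speed through the mean movement time.
        w_e = 4.133 * statistics.stdev(endpoint_errors)   # effective width (m)
        id_e = math.log2(distance / w_e + 1)              # effective ID (bits)
        return id_e / statistics.mean(movement_times)     # bits per second

    print(fitts_throughput(0.30, [0.004, -0.006, 0.002, -0.003],
                           [0.45, 0.52, 0.48, 0.50]))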
Export
BibTeX
@phdthesis{Bachyphd16, TITLE = {Biomechanical Models for Human-Computer Interaction}, AUTHOR = {Bachynskyi, Myroslav}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66888}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, ABSTRACT = {Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, are already a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number, or even absence, of input movement constraints imposed by a device form factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of four issues for the research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput, ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions: adopting optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; identifying the applicability limits of the method for a range of HCI tasks; validating the method outputs against ground-truth recordings in a typical HCI setting; demonstrating the added value of the method in the analysis of performance and ergonomics of touchscreen devices; and summarizing the performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the four above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) and at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of the movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity.}, }
Endnote
%0 Thesis %A Bachynskyi, Myroslav %Y Steimle, Jürgen %A referee: Schmidt, Albrecht %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Biomechanical Models for Human-Computer Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-0FD4-9 %U urn:nbn:de:bsz:291-scidok-66888 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xiv, 206 p. %V phd %9 phd %X Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, are already a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number, or even absence, of input movement constraints imposed by a device form factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of four issues for the research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput, ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions: adopting optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; identifying the applicability limits of the method for a range of HCI tasks; validating the method outputs against ground-truth recordings in a typical HCI setting; demonstrating the added value of the method in the analysis of performance and ergonomics of touchscreen devices; and summarizing the performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the four above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) and at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of the movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6688/
[7]
W.-C. Chiu, “Bayesian Non-Parametrics for Multi-Modal Segmentation,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{walonPhDThesis2016, TITLE = {Bayesian Non-Parametrics for Multi-Modal Segmentation}, AUTHOR = {Chiu, Wei-Chen}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66378}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Chiu, Wei-Chen %Y Fritz, Mario %A referee: Demberg, Vera %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Bayesian Non-Parametrics for Multi-Modal Segmentation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-788A-F %U urn:nbn:de:bsz:291-scidok-66378 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XII, 155 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6637/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[8]
L. Del Corro, “Methods for Open Information Extraction and Sense Disambiguation on Natural Language Text,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{delcorrophd15, TITLE = {Methods for Open Information Extraction and Sense Disambiguation on Natural Language Text}, AUTHOR = {Del Corro, Luciano}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Del Corro, Luciano %Y Gemulla, Rainer %A referee: Ponzetto, Simone Paolo %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Methods for Open Information Extraction and Sense Disambiguation on Natural Language Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-B3DB-3 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xiv, 101 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6346/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[9]
N. T. Doncheva, “Network Biology Methods for Functional Characterization and Integrative Prioritization of Disease Genes and Proteins,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{DonchevaPhD2016, TITLE = {Network Biology Methods for Functional Characterization and Integrative Prioritization of Disease Genes and Proteins}, AUTHOR = {Doncheva, Nadezhda Tsankova}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65957}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Doncheva, Nadezhda Tsankova %Y Albrecht, Mario %A referee: Lengauer, Thomas %A referee: Lenhof, Hans-Peter %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Network Biology Methods for Functional Characterization and Integrative Prioritization of Disease Genes and Proteins : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-1921-A %U urn:nbn:de:bsz:291-scidok-65957 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XII, 242 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6595/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[10]
O. Elek, “Efficient Methods for Physically-based Rendering of Participating Media,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{ElekPhD2016, TITLE = {Efficient Methods for Physically-based Rendering of Participating Media}, AUTHOR = {Elek, Oskar}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65357}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Elek, Oskar %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %A referee: Dachsbacher, Karsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Efficient Methods for Physically-based Rendering of Participating Media : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-F94D-E %U urn:nbn:de:bsz:291-scidok-65357 %I Universität des Saarlandes %C Saarbrücken %D 2016 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6535/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[11]
H. Hatefi Ardakani, “Finite Horizon Analysis of Markov Automata,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span, as special cases, the models of discrete- and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they can be equipped with reward and resource structures in order to analyse quantitative aspects of systems, like performance metrics, energy consumption, and repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety-critical systems; the Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness, however, has thus far prevented efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis is their analysis under budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study the mathematical foundations of Markov automata, since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we focus on the problem of computing the optimal expected resource-bounded reward. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses, in particular, optimal time-bounded reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem in an efficient way. We report on an implementation of our approach in a supporting tool and also demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies.
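To give a flavor of the kind of computation involved, under strong simplifying assumptions (discrete time steps, no continuous-time races, no error-bound machinery), here is a hypothetical Python sketch of finite-horizon value iteration for an optimal expected reward within a step budget. It is a crude discrete analogue, not the thesis's approximation scheme.

    def optimal_reward(states, actions, P, reward, budget):
        # v[s] = optimal expected reward collectable from s within 'budget'
        # steps. P[s][a] is a list of (probability, successor) pairs and
        # reward[s][a] the one-step reward; nondeterminism is resolved by max.
        v = {s: 0.0 for s in states}
        for _ in range(budget):
            v = {s: max(reward[s][a] + sum(p * v[t] for p, t in P[s][a])
                        for a in actions[s])
                 for s in states}
        return v

    states = ["s", "goal"]
    actions = {"s": ["try"], "goal": ["stay"]}
    P = {"s": {"try": [(0.5, "goal"), (0.5, "s")]},
         "goal": {"stay": [(1.0, "goal")]}}
    reward = {"s": {"try": 0.0}, "goal": {"stay": 1.0}}
    print(optimal_reward(states, actions, P, reward, budget=4))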
Export
BibTeX
@phdthesis{Hatefiphd17, TITLE = {Finite Horizon Analysis of {M}arkov Automata}, AUTHOR = {Hatefi Ardakani, Hassan}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67438}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, ABSTRACT = {Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span, as special cases, the models of discrete- and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they can be equipped with reward and resource structures in order to analyse quantitative aspects of systems, like performance metrics, energy consumption, and repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety-critical systems; the Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness, however, has thus far prevented efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis is their analysis under budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study the mathematical foundations of Markov automata, since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we focus on the problem of computing the optimal expected resource-bounded reward. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses, in particular, optimal time-bounded reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem in an efficient way. We report on an implementation of our approach in a supporting tool and also demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies.}, }
Endnote
%0 Thesis %A Hatefi Ardakani, Hassan %Y Hermanns, Holger %A referee: Buchholz, Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Finite Horizon Analysis of Markov Automata : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-9E81-C %U urn:nbn:de:bsz:291-scidok-67438 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P X, 175 p. %V phd %9 phd %X Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span, as special cases, the models of discrete- and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they can be equipped with reward and resource structures in order to analyse quantitative aspects of systems, like performance metrics, energy consumption, and repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety-critical systems; the Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness, however, has thus far prevented efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis is their analysis under budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study the mathematical foundations of Markov automata, since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we focus on the problem of computing the optimal expected resource-bounded reward. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses, in particular, optimal time-bounded reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem in an efficient way. We report on an implementation of our approach in a supporting tool and also demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6743/
[12]
A.-C. Hauschild, “Computational Methods for Breath Metabolomics in Clinical Diagnostics,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Hauschild_PhD2016, TITLE = {Computational Methods for Breath Metabolomics in Clinical Diagnostics}, AUTHOR = {Hauschild, Anne-Christin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65874}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Hauschild, Anne-Christin %Y Helms, Volkhard %A referee: Baumbach, Jan %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Computational Methods for Breath Metabolomics in Clinical Diagnostics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0C18-7 %U urn:nbn:de:bsz:291-scidok-65874 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P 188 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6587/
[13]
P. Kellnhofer, “Perceptual Modeling for Stereoscopic 3D,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Virtual and Augmented Reality applications typically rely on stereoscopic presentation and involve intensive object and observer motion. A combination of high dynamic range and stereoscopic capabilities is becoming popular for consumer displays, and is a desirable functionality of head-mounted displays to come. This thesis focuses on the complex interactions between all these visual cues on digital displays. The first part investigates the challenges of combining stereoscopic 3D with motion. We consider the interaction between continuous motion and its presentation as discrete frames. Then, we discuss disparity processing for the accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and eye fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond current display capabilities by considering the full perceivable luminance range, and we simulate the real-world experience under such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and of reflective/refractive surface rendering. The core of our research methodology is perceptual modeling, supported by our own experimental studies, to overcome the limitations of current display technologies and to improve the viewer experience by enhancing perceived depth, reducing visual artifacts, or improving viewing comfort.
Export
BibTeX
@phdthesis{Kellnhoferphd2016, TITLE = {Perceptual Modeling for Stereoscopic {3D}}, AUTHOR = {Kellnhofer, Petr}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66813}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, ABSTRACT = {Virtual and Augmented Reality applications typically rely on stereoscopic presentation and involve intensive object and observer motion. A combination of high dynamic range and stereoscopic capabilities is becoming popular for consumer displays, and is a desirable functionality of head-mounted displays to come. This thesis focuses on the complex interactions between all these visual cues on digital displays. The first part investigates the challenges of combining stereoscopic 3D with motion. We consider the interaction between continuous motion and its presentation as discrete frames. Then, we discuss disparity processing for the accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and eye fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond current display capabilities by considering the full perceivable luminance range, and we simulate the real-world experience under such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and of reflective/refractive surface rendering. The core of our research methodology is perceptual modeling, supported by our own experimental studies, to overcome the limitations of current display technologies and to improve the viewer experience by enhancing perceived depth, reducing visual artifacts, or improving viewing comfort.}, }
Endnote
%0 Thesis %A Kellnhofer, Petr %Y Myszkowski, Karol %A referee: Seidel, Hans-Peter %A referee: Masia, Belen %A referee: Matusik, Wojciech %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Perceptual Modeling for Stereoscopic 3D : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-BBA6-1 %U urn:nbn:de:bsz:291-scidok-66813 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xxiv, 158 p. %V phd %9 phd %X Virtual and Augmented Reality applications typically rely on stereoscopic presentation and involve intensive object and observer motion. A combination of high dynamic range and stereoscopic capabilities is becoming popular for consumer displays, and is a desirable functionality of head-mounted displays to come. This thesis focuses on the complex interactions between all these visual cues on digital displays. The first part investigates the challenges of combining stereoscopic 3D with motion. We consider the interaction between continuous motion and its presentation as discrete frames. Then, we discuss disparity processing for the accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and eye fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond current display capabilities by considering the full perceivable luminance range, and we simulate the real-world experience under such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and of reflective/refractive surface rendering. The core of our research methodology is perceptual modeling, supported by our own experimental studies, to overcome the limitations of current display technologies and to improve the viewer experience by enhancing perceived depth, reducing visual artifacts, or improving viewing comfort. %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6681/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[14]
O. Klehm, “User-Guided Scene Stylization using Efficient Rendering Techniques,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Klehmphd2016, TITLE = {User-Guided Scene Stylization using Efficient Rendering Techniques}, AUTHOR = {Klehm, Oliver}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65321}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Klehm, Oliver %Y Seidel, Hans-Peter %A referee: Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T User-Guided Scene Stylization using Efficient Rendering Techniques : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-9C13-A %U urn:nbn:de:bsz:291-scidok-65321 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XIII, 111 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6532/
[15]
M. Košta, “New Concepts for Real Quantifier Elimination by Virtual Substitution,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Kostaphd16, TITLE = {New Concepts for Real Quantifier Elimination by Virtual Substitution}, AUTHOR = {Ko{\v s}ta, Marek}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Košta, Marek %Y Sturm, Thomas %A referee: Weber, Andreas %A referee: Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations Automation of Logic, MPI for Informatics, Max Planck Society %T New Concepts for Real Quantifier Elimination by Virtual Substitution : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-30A8-9 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xvi, 214 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6716/
[16]
M. Künnemann, “Tight(er) Bounds for Similarity Measures, Smoothed Approximation and Broadcasting,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Kuennemannphd2016, TITLE = {Tight(er) Bounds for Similarity Measures, Smoothed Approximation and Broadcasting}, AUTHOR = {K{\"u}nnemann, Marvin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65991}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Künnemann, Marvin %Y Doerr, Benjamin %A referee: Mehlhorn, Kurt %A referee: Welzl, Emo %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Tight(er) Bounds for Similarity Measures, Smoothed Approximation and Broadcasting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-423A-3 %U urn:nbn:de:bsz:291-scidok-65991 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XI, 223 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6599/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[17]
S. Ott, “Algorithms for Classical and Modern Scheduling Problems,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Ott_PhD2016, TITLE = {Algorithms for Classical and Modern Scheduling Problems}, AUTHOR = {Ott, Sebastian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65763}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Ott, Sebastian %Y Mehlhorn, Kurt %A referee: Huang, Chien-Chung %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Algorithms for Classical and Modern Scheduling Problems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0C1B-1 %U urn:nbn:de:bsz:291-scidok-65763 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P IX, 109 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6576/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[18]
A. Pironti, “Improving and Validating Data-driven Genotypic Interpretation Systems for the Selection of Antiretroviral Therapies,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Pirontiphd16, TITLE = {Improving and Validating Data-driven Genotypic Interpretation Systems for the Selection of Antiretroviral Therapies}, AUTHOR = {Pironti, Alejandro}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67190}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Pironti, Alejandro %Y Lengauer, Thomas %A referee: Lenhof, Hans-Peter %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Improving and Validating Data-driven Genotypic Interpretation Systems for the Selection of Antiretroviral Therapies : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-30D5-5 %U urn:nbn:de:bsz:291-scidok-67190 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P x, 272 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6719/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[19]
L. Pishchulin, “Articulated People Detection and Pose Estimation in Challenging Real World Environments,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{PishchulinPhD2016, TITLE = {Articulated People Detection and Pose Estimation in Challenging Real World Environments}, AUTHOR = {Pishchulin, Leonid}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65478}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Pishchulin, Leonid %Y Schiele, Bernt %A referee: Theobalt, Christian %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Articulated People Detection and Pose Estimation in Challenging Real World Environments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-F000-B %U urn:nbn:de:bsz:291-scidok-65478 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XIII, 248 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6547/
[20]
S. S. Rangapuram, “Graph-based Methods for Unsupervised and Semi-supervised Data Analysis,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{rangphd17, TITLE = {Graph-based Methods for Unsupervised and Semi-supervised Data Analysis}, AUTHOR = {Rangapuram, Syama Sundar}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66590}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Rangapuram, Syama Sundar %Y Hein, Matthias %A referee: Hoai An, Le Thi %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Graph-based Methods for Unsupervised and Semi-supervised Data Analysis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-9EA4-D %U urn:nbn:de:bsz:291-scidok-66590 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XI, 161 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6659/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[21]
B. Reinert, “Interactive, Example-driven Synthesis and Manipulation of Visual Media,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Reinertbphd17, TITLE = {Interactive, Example-driven Synthesis and Manipulation of Visual Media}, AUTHOR = {Reinert, Bernhard}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67660}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Reinert, Bernhard %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Interactive, Example-driven Synthesis and Manipulation of Visual Media : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5A03-B %U urn:nbn:de:bsz:291-scidok-67660 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XX, 116, XVII p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6766/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[22]
H. Rhodin, “From Motion Capture to Interactive Virtual Worlds: Towards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{RhodinPhD2016, TITLE = {From Motion Capture to Interactive Virtual Worlds: {T}owards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation}, AUTHOR = {Rhodin, Helge}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67413}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Rhodin, Helge %Y Theobalt, Christian %A referee: Seidel, Hans-Peter %A referee: Bregler, Christoph %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T From Motion Capture to Interactive Virtual Worlds: Towards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-6310-C %U urn:nbn:de:bsz:291-scidok-67413 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P 179 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6741/
[23]
S. Sridhar, “Tracking Hands in Action for Gesture-based Computer Input,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{SridharPhD2016, TITLE = {Tracking Hands in Action for Gesture-based Computer Input}, AUTHOR = {Sridhar, Srinath}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67712}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Sridhar, Srinath %Y Theobalt, Christian %A referee: Oulasvirta, Antti %A referee: Schiele, Bernt %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Tracking Hands in Action for Gesture-based Computer Input : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-631C-3 %U urn:nbn:de:bsz:291-scidok-67712 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XXIII, 161 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6771/
[24]
N. Tandon, “Commonsense Knowledge Acquisition and Applications,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{TandonPhD2016, TITLE = {Commonsense Knowledge Acquisition and Applications}, AUTHOR = {Tandon, Niket}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66291}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Tandon, Niket %Y Weikum, Gerhard %A referee: Lieberman, Henry %A referee: Vreeken, Jilles %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Commonsense Knowledge Acquisition and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-78F6-A %U urn:nbn:de:bsz:291-scidok-66291 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XIV, 154 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6629/
[25]
C. Teflioudi, “Algorithms for Shared-Memory Matrix Completion and Maximum Inner Product Search,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Teflioudiphd2016, TITLE = {Algorithms for Shared-Memory Matrix Completion and Maximum Inner Product Search}, AUTHOR = {Teflioudi, Christina}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Teflioudi, Christina %Y Gemulla, Rainer %A referee: Weikum, Gerhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Algorithms for Shared-Memory Matrix Completion and Maximum Inner Product Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-43FA-2 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xi, 110 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6469/
[26]
K. Templin, “Depth, Shading, and Stylization in Stereoscopic Cinematography,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Templinphd15, TITLE = {Depth, Shading, and Stylization in Stereoscopic Cinematography}, AUTHOR = {Templin, Krzysztof}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-64390}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Templin, Krzysztof %Y Seidel, Hans-Peter %A referee: Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Depth, Shading, and Stylization in Stereoscopic Cinematography : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-19FA-2 %U urn:nbn:de:bsz:291-scidok-64390 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xii, 100 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6439/
[27]
B. Turoňová, “Progressive Stochastic Reconstruction Technique for Cryo Electron Tomography,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{TuronovaPhD2016, TITLE = {Progressive Stochastic Reconstruction Technique for Cryo Electron Tomography}, AUTHOR = {Turo{\v n}ov{\'a}, Beata}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66400}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Turoňová, Beata %Y Slusallek, Philipp %A referee: Louis, Alfred K. %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Progressive Stochastic Reconstruction Technique for Cryo Electron Tomography : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-7898-F %U urn:nbn:de:bsz:291-scidok-66400 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XI, 118 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6640/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[28]
M. Yahya, “Question Answering and Query Processing for Extended Knowledge Graphs,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{yahyaphd2016, TITLE = {Question Answering and Query Processing for Extended Knowledge Graphs}, AUTHOR = {Yahya, Mohamed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Yahya, Mohamed %Y Weikum, Gerhard %A referee: Schütze, Hinrich %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Question Answering and Query Processing for Extended Knowledge Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-48C2-7 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P x, 160 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6476/
2015
[29]
M. AbdelMaksoud, “Processor Pipelines in WCET Analysis,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Abdelphd15, TITLE = {Processor Pipelines in {WCET} Analysis}, AUTHOR = {AbdelMaksoud, Mohamed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A AbdelMaksoud, Mohamed %Y Wilhelm, Reinhard %A referee: Reineke, Jan %A referee: Falk, Heiko %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Processor Pipelines in WCET Analysis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-6E5D-1 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 73 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6128/ %U http://scidok.sulb.uni-saarland.de/doku/urheberrecht.php?la=de
[30]
F. Abed, “Coordinating Selfish Players in Scheduling Games,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{AbedPhd15, TITLE = {Coordinating Selfish Players in Scheduling Games}, AUTHOR = {Abed, Fidaa}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Abed, Fidaa %Y Mehlhorn, Kurt %A referee: Megow, Nicole %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Coordinating Selfish Players in Scheduling Games : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-4BBB-1 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 70 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6234/
[31]
A. Elhayek, “Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{ElhayekPhd15, TITLE = {Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups}, AUTHOR = {Elhayek, Ahmed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Elhayek, Ahmed %Y Theobalt, Christian %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-48A0-4 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P XIV, 124 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6325/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[32]
I. Georgiev, “Path Sampling Techniques for Efficient Light Transport Simulation,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Georgievphd15, TITLE = {Path Sampling Techniques for Efficient Light Transport Simulation}, AUTHOR = {Georgiev, Iliyan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Georgiev, Iliyan %Y Slusallek, Philipp %A referee: Seidel, Hans-Peter %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Path Sampling Techniques for Efficient Light Transport Simulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-6E59-9 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 162 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/urheberrecht.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6152/
[33]
W. Hagemann, “Symbolic Orthogonal Projections: A New Polyhedral Representation for Reachability Analysis of Hybrid Systems,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{HagemannPhd15, TITLE = {Symbolic Orthogonal Projections: A New Polyhedral Representation for Reachability Analysis of Hybrid Systems}, AUTHOR = {Hagemann, Willem}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Hagemann, Willem %Y Weidenbach, Christoph %A referee: Fränzle, Martin %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Symbolic Orthogonal Projections: A New Polyhedral Representation for Reachability Analysis of Hybrid Systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-26AA-2 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P XIII, 94 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6304/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[34]
J. Hoffart, “Discovering and Disambiguating Named Entities in Text,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Hoffartthesis, TITLE = {Discovering and Disambiguating Named Entities in Text}, AUTHOR = {Hoffart, Johannes}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Hoffart, Johannes %Y Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Discovering and Disambiguating Named Entities in Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0025-6C44-0 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P X, 103 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6022/
[35]
M. Lamotte-Schubert, “Automatic Authorization Analysis,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{LamottePhd15, TITLE = {Automatic Authorization Analysis}, AUTHOR = {Lamotte-Schubert, Manuel}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Lamotte-Schubert, Manuel %Y Weidenbach, Christoph %A referee: Baumgartner, Peter %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society %T Automatic Authorization Analysis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-FD0B-7 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 118 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6257/
[36]
A. Neumann, “On Efficiency and Reliability in Computer Science,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{NeumannPhd15, TITLE = {On Efficiency and Reliability in Computer Science}, AUTHOR = {Neumann, Adrian}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Neumann, Adrian %Y Mehlhorn, Kurt %A referee: Wiese, Andreas %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On Efficiency and Reliability in Computer Science : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-FC6A-C %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 95 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6268/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[37]
C. Nguyen, “Data-driven Approaches for Interactive Appearance Editing,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{NguyenPhD2015, TITLE = {Data-driven Approaches for Interactive Appearance Editing}, AUTHOR = {Nguyen, Chuong}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-62372}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Nguyen, Chuong %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Data-driven Approaches for Interactive Appearance Editing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-9C47-9 %U urn:nbn:de:bsz:291-scidok-62372 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P XVII, 134 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6237/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[38]
S. Olberding, “Fabricating Custom-shaped Thin-film Interactive Surfaces,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{OlberdingPhD2015, TITLE = {Fabricating Custom-shaped Thin-film Interactive Surfaces}, AUTHOR = {Olberding, Simon}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-63285}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Olberding, Simon %Y Steimle, Jürgen %A referee: Krüger, Antonio %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Fabricating Custom-shaped Thin-film Interactive Surfaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-5EF8-2 %U urn:nbn:de:bsz:291-scidok-63285 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P XVI, 145 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6328/
[39]
B. Pepik, “Richer Object Representations for Object Class Detection in Challenging Real World Images,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Pepikphd15, TITLE = {Richer Object Representations for Object Class Detection in Challenging Real World Images}, AUTHOR = {Pepik, Bojan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Pepik, Bojan %Y Schiele, Bernt %A referee: Theobalt, Christian %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Richer Object Representations for Object Class Detection in Challenging Real World Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-7678-5 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P xii, 219 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6361/
[40]
A. Pourmiri, “Random Walk-based Algorithms on Networks,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Pourmiriphd15, TITLE = {Random Walk-based Algorithms on Networks}, AUTHOR = {Pourmiri, Ali}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Pourmiri, Ali %Y Mehlhorn, Kurt %A referee: Sauerwald, Thomas %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Random Walk-based Algorithms on Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-6E73-D %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 112 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6186/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[41]
F. Ramezani, “Application of Multiplicative Weights Update Method in Algorithmic Game Theory,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{RamezaniPHD2015, TITLE = {Application of Multiplicative Weights Update Method in Algorithmic Game Theory}, AUTHOR = {Ramezani, Fahimeh}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Ramezani, Fahimeh %Y Mehlhorn, Kurt %A referee: Elbassioni, Khaled %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Application of Multiplicative Weights Update Method in Algorithmic Game Theory : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-4BB9-5 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 85 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6226/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[42]
C. Rizkallah, “Verification of Program Computations,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{RizkallahPhd15, TITLE = {Verification of Program Computations}, AUTHOR = {Rizkallah, Christine}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Rizkallah, Christine %Y Mehlhorn, Kurt %A referee: Nipkow, Tobias %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Verification of Program Computations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-FD10-A %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 132 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6254/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[43]
S. Seufert, “Algorithmic Building Blocks for Relationship Analysis over Large Graphs,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Seufertphd15, TITLE = {Algorithmic Building Blocks for Relationship Analysis over Large Graphs}, AUTHOR = {Seufert, Stephan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Seufert, Stephan %Y Bedathur, Srikanta %A referee: Barbosa, Denilson %A referee: Weidenbach, Christoph %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Algorithmic Building Blocks for Relationship Analysis over Large Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-6E65-D %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 198 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6183/ %U http://scidok.sulb.uni-saarland.de/doku/urheberrecht.php?la=de
[44]
M. Suda, “Resolution-based Methods for Linear Temporal Reasoning,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{SudaPhd15, TITLE = {Resolution-based Methods for Linear Temporal Reasoning}, AUTHOR = {Suda, Martin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Suda, Martin %Y Weidenbach, Christoph %A referee: Hoffmann, Jörg %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society %T Resolution-based Methods for Linear Temporal Reasoning : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-FC90-3 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 233 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6274/
[45]
T. Tylenda, “Methods and Tools for Summarization of Entities and Facts in Knowledge Bases,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{TylendaPhd15, TITLE = {Methods and Tools for Summarization of Entities and Facts in Knowledge Bases}, AUTHOR = {Tylenda, Tomasz}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Tylenda, Tomasz %Y Weikum, Gerhard %A referee: Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Methods and Tools for Summarization of Entities and Facts in Knowledge Bases : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-FC65-5 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 113 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6263/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[46]
Z. Wang, “Pattern Search for the Visualization of Scalar, Vector, and Line Fields,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{WangPhd15, TITLE = {Pattern Search for the Visualization of Scalar, Vector, and Line Fields}, AUTHOR = {Wang, Zhongjie}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Wang, Zhongjie %Y Seidel, Hans-Peter %A referee: Weinkauf, Tino %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Pattern Search for the Visualization of Scalar, Vector, and Line Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-48A5-9 %I Universität des Saarlandes %C Saarbrücken %D 2015 %P 103 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/6330/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[47]
M. A. Yosef, “U-AIDA: A Customizable System for Named Entity Recognition, Classification, and Disambiguation,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Yosefphd15, TITLE = {U-{AIDA}: A Customizable System for Named Entity Recognition, Classification, and Disambiguation}, AUTHOR = {Yosef, Mohamed Amir}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Yosef, Mohamed Amir %Y Weikum, Gerhard %A referee: Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T U-AIDA: A Customizable System for Named Entity Recognition, Classification, and Disambiguation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-B9B9-C %I Universität des Saarlandes %C Saarbrücken %D 2015 %P XV, 101 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6370/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
2014
[48]
F. Alvanaki, “Mining Interesting Events on Large and Dynamic Data,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Alvanakithesis, TITLE = {Mining Interesting Events on Large and Dynamic Data}, AUTHOR = {Alvanaki, Foteini}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Alvanaki, Foteini %Y Michel, Sebastian %A referee: Weikum, Gerhard %A referee: Delis, Alexis %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Mining Interesting Events on Large and Dynamic Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0025-6C4E-B %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 128 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/5985/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[49]
Y. Assenov, “Identification and Prioritization of Genomic Loci with Disease-specific Methylation,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{AssenovPhD2014, TITLE = {Identification and Prioritization of Genomic Loci with Disease-specific Methylation}, AUTHOR = {Assenov, Yassen}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58865}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Assenov, Yassen %Y Lengauer, Thomas %A referee: Bock, Christoph %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Identification and Prioritization of Genomic Loci with Disease-specific Methylation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-E49E-D %U urn:nbn:de:bsz:291-scidok-58865 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P IX, 142 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5886/
[50]
B. Beggel, “Determining and Utilizing the Quasispecies of the Hepatitis B Virus in Clinical Applications,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Beggeltheses2014, TITLE = {Determining and Utilizing the Quasispecies of the Hepatitis {B} Virus in Clinical Applications}, AUTHOR = {Beggel, Bastian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58317}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Beggel, Bastian %Y Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Determining and Utilizing the Quasispecies of the Hepatitis B Virus in Clinical Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-5DFB-A %U urn:nbn:de:bsz:291-scidok-58317 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 138 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5831/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[51]
H. Blankenburg, “Computational Methods for Integrating and Analyzing Human Systems Biology Data,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Blankenburg2014, TITLE = {Computational Methods for Integrating and Analyzing Human Systems Biology Data}, AUTHOR = {Blankenburg, Hagen}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59329}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Blankenburg, Hagen %Y Albrecht, Mario %A referee: Helms, Volkhard %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Computational Methods for Integrating and Analyzing Human Systems Biology Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-9671-3 %U urn:nbn:de:bsz:291-scidok-59329 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 181 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5932/
[52]
K. Bringmann, “Sampling from Discrete Distributions and Computing Fréchet Distances,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{BringmannPhD2014, TITLE = {Sampling from Discrete Distributions and Computing {F}r{\'e}chet Distances}, AUTHOR = {Bringmann, Karl}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Bringmann, Karl %Y Mehlhorn, Kurt %A referee: Steger, Angelika %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Sampling from Discrete Distributions and Computing Fréchet Distances : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-9ACC-0 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 174 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2015/5988/
[53]
M. Dietzen, “Modeling Protein Interactions in Protein Binding Sites and Oligomeric Protein Complexes,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{DietzenPhD2014, TITLE = {Modeling Protein Interactions in Protein Binding Sites and Oligomeric Protein Complexes}, AUTHOR = {Dietzen, Matthias}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59402}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Dietzen, Matthias %Y Lengauer, Thomas %A referee: Hildebrandt, Andreas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Modeling Protein Interactions in Protein Binding Sites and Oligomeric Protein Complexes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-E4A2-1 %U urn:nbn:de:bsz:291-scidok-59402 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P XVIII, 259 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5940/
[54]
R. Dimitrova, “Synthesis and Control of Infinite-state Systems with Partial Observability,” Universität des Saarlandes, Saarbrücken, 2014.
Abstract
Complex computer systems play an important role in every part of everyday life and their correctness is often vital to human safety. In light of the recent advances in the area of formal methods and the increasing availability and maturity of tools and techniques, the use of verification techniques to show that a system satisfies a specified property is about to become an integral part of the development process. To minimize the development costs, formal methods must be applied as early as possible, before the entire system is fully developed, or even at the stage when only its specification is available. The goal of synthesis is to automatically construct an implementation guaranteed to fulfill the provided specification, and, if no implementation exists, to report that the given requirements cannot be realized. When synthesizing an individual component within a system and its external environment, the synthesis procedure must take into account the component's interface and deliver implementations that comply with it. For example, what a component can observe about its environment may be restricted by imprecise sensors or inaccessible communication channels. In addition, sufficiently precise models of a component's environment are typically infinite-state, for example due to modeling real time or unbounded communication buffers. This thesis presents novel synthesis methods that respect the given interface limitations of the synthesized system components and are applicable to infinite-state models. The studied computational model is that of infinite-state two-player games under incomplete information. The contributions are structured into three parts, corresponding to a classification of such games according to the interface between the synthesized component and its environment. In the first part, we obtain decidability results for a class of game structures where the player corresponding to the synthesized component has a given finite set of possible observations and a finite set of possible actions. A prominent type of systems for which the interface of a component naturally defines a finite set of observations are Lossy Channel Systems. We provide symbolic game solving and strategy synthesis algorithms for lossy channel games under incomplete information with safety and reachability winning conditions. Our second contribution is a counterexample-guided abstraction refinement scheme for solving infinite-state games under incomplete information in which the actions available to the component are still finitely many, but no finite set of possible observations is given. This situation is common, for example, in the synthesis of mutex protocols or robot controllers. In this setting, the observations correspond to observation predicates, which are logical formulas, and their computation is an integral part of our synthesis procedure. The resulting game solving method is applicable to games that are out of the scope of other available techniques. Lastly, we study systems in which, in addition to the possibly infinite set of observation predicates, the component can choose between infinitely many possible actions. Timed games under incomplete information are a fundamental class of games for which this is the case. We extend the abstraction-refinement procedure to develop the first systematic method for the synthesis of observation predicates for timed control. Automatically refining the set of candidate observations based on counterexamples demonstrates better potential than brute-force enumeration of observation sets, in particular for systems where fine granularity of the observations is necessary.
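To make the finite-observation setting of the first part concrete, the following minimal sketch (in Python, with purely illustrative identifiers such as trans, obs, and safe that are not the thesis's notation) shows the classical knowledge-set construction: the controller tracks a belief, i.e., the set of states consistent with its observations so far, and wins a safety objective if some action keeps every observation-compatible successor belief winning. The thesis's symbolic algorithms for lossy channel games are considerably more sophisticated than this explicit enumeration.

    # Minimal sketch: solving a finite safety game under incomplete
    # information via the knowledge-set (belief) construction.
    # Assumes every (state, action) pair has at least one successor.
    def solve_safety(actions, trans, obs, safe, init):
        """trans[(s, a)]: set of successor states (the environment resolves
        nondeterminism); obs[s]: the controller's observation in state s;
        safe: set of safe states; init: set of possible initial states."""

        def succ_beliefs(belief, a):
            successors = set()
            for s in belief:
                successors |= trans[(s, a)]
            by_obs = {}                      # split successors by observation
            for s in successors:
                by_obs.setdefault(obs[s], set()).add(s)
            return [frozenset(group) for group in by_obs.values()]

        init_belief = frozenset(init)
        beliefs, stack = {init_belief}, [init_belief]
        while stack:                         # explore all reachable beliefs
            b = stack.pop()
            for a in actions:
                for nb in succ_beliefs(b, a):
                    if nb not in beliefs:
                        beliefs.add(nb)
                        stack.append(nb)

        # Greatest fixed point: repeatedly discard beliefs that have no
        # action keeping all observation-successor beliefs winning.
        win = {b for b in beliefs if b <= safe}
        changed = True
        while changed:
            changed = False
            for b in list(win):
                if not any(all(nb in win for nb in succ_beliefs(b, a))
                           for a in actions):
                    win.discard(b)
                    changed = True
        return init_belief in win

A winning strategy is then read off by fixing, for each winning belief, one action whose successor beliefs all remain winning; the construction is exponential in the state space, which is one reason the thesis develops symbolic methods instead.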
Export
BibTeX
@phdthesis{Dimitrova2014, TITLE = {Synthesis and Control of Infinite-state Systems with Partial Observability}, AUTHOR = {Dimitrova, Rayna}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Complex computer systems play an important role in every part of everyday life and their correctness is often vital to human safety. In light of the recent advances in the area of formal methods and the increasing availability and maturity of tools and techniques, the use of verification techniques to show that a system satisfies a specified property is about to become an integral part of the development process. To minimize the development costs, formal methods must be applied as early as possible, before the entire system is fully developed, or even at the stage when only its specification is available. The goal of synthesis is to automatically construct an implementation guaranteed to fulfill the provided specification, and, if no implementation exists, to report that the given requirements cannot be realized. When synthesizing an individual component within a system and its external environment, the synthesis procedure must take into account the component's interface and deliver implementations that comply with it. For example, what a component can observe about its environment may be restricted by imprecise sensors or inaccessible communication channels. In addition, sufficiently precise models of a component's environment are typically infinite-state, for example due to modeling real time or unbounded communication buffers. This thesis presents novel synthesis methods that respect the given interface limitations of the synthesized system components and are applicable to infinite-state models. The studied computational model is that of infinite-state two-player games under incomplete information. The contributions are structured into three parts, corresponding to a classification of such games according to the interface between the synthesized component and its environment. In the first part, we obtain decidability results for a class of game structures where the player corresponding to the synthesized component has a given finite set of possible observations and a finite set of possible actions. A prominent type of systems for which the interface of a component naturally defines a finite set of observations are Lossy Channel Systems. We provide symbolic game solving and strategy synthesis algorithms for lossy channel games under incomplete information with safety and reachability winning conditions. Our second contribution is a counterexample-guided abstraction refinement scheme for solving infinite-state games under incomplete information in which the actions available to the component are still finitely many, but no finite set of possible observations is given. This situation is common, for example, in the synthesis of mutex protocols or robot controllers. In this setting, the observations correspond to observation predicates, which are logical formulas, and their computation is an integral part of our synthesis procedure. The resulting game solving method is applicable to games that are out of the scope of other available techniques. Lastly, we study systems in which, in addition to the possibly infinite set of observation predicates, the component can choose between infinitely many possible actions. Timed games under incomplete information are a fundamental class of games for which this is the case. We extend the abstraction-refinement procedure to develop the first systematic method for the synthesis of observation predicates for timed control. Automatically refining the set of candidate observations based on counterexamples demonstrates better potential than brute-force enumeration of observation sets, in particular for systems where fine granularity of the observations is necessary.}, }
Endnote
%0 Thesis %A Dimitrova, Rayna %Y Finkbeiner, Bernd %A referee: Majumdar, Rupak %+ Group R. Majumdar, Max Planck Institute for Software Systems, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Group R. Majumdar, Max Planck Institute for Software Systems, Max Planck Society %T Synthesis and Control of Infinite-state Systems with Partial Observability : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-C94F-3 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 216 p. %V phd %9 phd %X Complex computer systems play an important role in every part of everyday life and their correctness is often vital to human safety. In light of the recent advances in the area of formal methods and the increasing availability and maturity of tools and techniques, the use of verification techniques to show that a system satisfies a specified property is about to become an integral part of the development process. To minimize the development costs, formal methods must be applied as early as possible, before the entire system is fully developed, or even at the stage when only its specification is available. The goal of synthesis is to automatically construct an implementation guaranteed to fulfill the provided specification, and, if no implementation exists, to report that the given requirements cannot be realized. When synthesizing an individual component within a system and its external environment, the synthesis procedure must take into account the component's interface and deliver implementations that comply with it. For example, what a component can observe about its environment may be restricted by imprecise sensors or inaccessible communication channels. In addition, sufficiently precise models of a component's environment are typically infinite-state, for example due to modeling real time or unbounded communication buffers. This thesis presents novel synthesis methods that respect the given interface limitations of the synthesized system components and are applicable to infinite-state models. The studied computational model is that of infinite-state two-player games under incomplete information. The contributions are structured into three parts, corresponding to a classification of such games according to the interface between the synthesized component and its environment. In the first part, we obtain decidability results for a class of game structures where the player corresponding to the synthesized component has a given finite set of possible observations and a finite set of possible actions. A prominent type of systems for which the interface of a component naturally defines a finite set of observations are Lossy Channel Systems. We provide symbolic game solving and strategy synthesis algorithms for lossy channel games under incomplete information with safety and reachability winning conditions. Our second contribution is a counterexample-guided abstraction refinement scheme for solving infinite-state games under incomplete information in which the actions available to the component are still finitely many, but no finite set of possible observations is given. This situation is common, for example, in the synthesis of mutex protocols or robot controllers. In this setting, the observations correspond to observation predicates, which are logical formulas, and their computation is an integral part of our synthesis procedure. The resulting game solving method is applicable to games that are out of the scope of other available techniques. Lastly, we study systems in which, in addition to the possibly infinite set of observation predicates, the component can choose between infinitely many possible actions. Timed games under incomplete information are a fundamental class of games for which this is the case. We extend the abstraction-refinement procedure to develop the first systematic method for the synthesis of observation predicates for timed control. Automatically refining the set of candidate observations based on counterexamples demonstrates better potential than brute-force enumeration of observation sets, in particular for systems where fine granularity of the observations is necessary. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5946/
[55]
M. Dylla, “Efficient Querying and Learning in Probabilistic and Temporal Databases,” Universität des Saarlandes, Saarbrücken, 2014.
Abstract
Probabilistic databases store, query, and manage large amounts of uncertain information. This thesis advances the state-of-the-art in probabilistic databases in three different ways: 1. We present a closed and complete data model for temporal probabilistic databases and analyze its complexity. Queries are posed via temporal deduction rules which induce lineage formulas capturing both time and uncertainty. 2. We devise a methodology for computing the top-k most probable query answers. It is based on first-order lineage formulas representing sets of answer candidates. Theoretically derived probability bounds on these formulas enable pruning low-probability answers. 3. We introduce the problem of learning tuple probabilities which allows updating and cleaning of probabilistic databases. We study its complexity, characterize its solutions, cast it into an optimization problem, and devise an approximation algorithm based on stochastic gradient descent. All of the above contributions support consistency constraints and are evaluated experimentally.
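As a toy illustration of the lineage formulas mentioned above, the sketch below (Python; a brute-force enumeration of possible worlds under assumed tuple independence, not the thesis's first-order lineage machinery) computes the probability of a propositional lineage formula:

    from itertools import product

    def variables(f):
        if f[0] == 'var':
            return {f[1]}
        if f[0] == 'not':
            return variables(f[1])
        return variables(f[1]) | variables(f[2])   # 'and' / 'or'

    def holds(f, world):
        op = f[0]
        if op == 'var':
            return world[f[1]]
        if op == 'not':
            return not holds(f[1], world)
        if op == 'and':
            return holds(f[1], world) and holds(f[2], world)
        return holds(f[1], world) or holds(f[2], world)

    def lineage_probability(f, p):
        """Sum the weights of all possible worlds satisfying f; each tuple
        t exists independently with probability p[t]. Exponential in the
        number of tuples -- exact inference is #P-hard in general."""
        vs = sorted(variables(f))
        total = 0.0
        for bits in product([False, True], repeat=len(vs)):
            world = dict(zip(vs, bits))
            if holds(f, world):
                weight = 1.0
                for v in vs:
                    weight *= p[v] if world[v] else 1.0 - p[v]
                total += weight
        return total

    # An answer derived from tuples t1 and t2 jointly, or from t3 alone:
    f = ('or', ('and', ('var', 't1'), ('var', 't2')), ('var', 't3'))
    print(lineage_probability(f, {'t1': 0.9, 't2': 0.8, 't3': 0.5}))  # 0.86

The thesis's top-k method avoids exactly this enumeration: it derives probability bounds on (first-order) lineage formulas and prunes low-probability answer candidates early.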
Export
BibTeX
@phdthesis{DyllaPhDThesis2014, TITLE = {Efficient Querying and Learning in Probabilistic and Temporal Databases}, AUTHOR = {Dylla, Maximilian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58146}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Probabilistic databases store, query, and manage large amounts of uncertain information. This thesis advances the state-of-the-art in probabilistic databases in three different ways: 1. We present a closed and complete data model for temporal probabilistic databases and analyze its complexity. Queries are posed via temporal deduction rules which induce lineage formulas capturing both time and uncertainty. 2. We devise a methodology for computing the top-k most probable query answers. It is based on first-order lineage formulas representing sets of answer candidates. Theoretically derived probability bounds on these formulas enable pruning low-probability answers. 3. We introduce the problem of learning tuple probabilities which allows updating and cleaning of probabilistic databases. We study its complexity, characterize its solutions, cast it into an optimization problem, and devise an approximation algorithm based on stochastic gradient descent. All of the above contributions support consistency constraints and are evaluated experimentally.}, }
Endnote
%0 Thesis %A Dylla, Maximilian %Y Weikum, Gerhard %A referee: Theobald, Martin %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Efficient Querying and Learning in Probabilistic and Temporal Databases : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-3C44-E %U urn:nbn:de:bsz:291-scidok-58146 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P VIII, 169 p. %V phd %9 phd %X Probabilistic databases store, query, and manage large amounts of uncertain information. This thesis advances the state-of-the-art in probabilistic databases in three different ways: 1. We present a closed and complete data model for temporal probabilistic databases and analyze its complexity. Queries are posed via temporal deduction rules which induce lineage formulas capturing both time and uncertainty. 2. We devise a methodology for computing the top-k most probable query answers. It is based on first-order lineage formulas representing sets of answer candidates. Theoretically derived probability bounds on these formulas enable pruning low-probability answers. 3. We introduce the problem of learning tuple probabilities which allows updating and cleaning of probabilistic databases. We study its complexity, characterize its solutions, cast it into an optimization problem, and devise an approximation algorithm based on stochastic gradient descent. All of the above contributions support consistency constraints and are evaluated experimentally. %K Deduction Rules, Probabilistic Database, Temporal Database, Learning, Constraints, Top-k %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5814/
[56]
L. Feuerbach, “Evolutionary Epigenomics - Identifying Functional Genome Elements by Epigenetic Footprints in the DNA,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Feuerbach2014, TITLE = {Evolutionary Epigenomics -- Identifying Functional Genome Elements by Epigenetic Footprints in the {DNA}}, AUTHOR = {Feuerbach, Lars}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Feuerbach, Lars %Y Lengauer, Thomas %A referee: Hein, Jotun %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Evolutionary Epigenomics - Identifying Functional Genome Elements by Epigenetic Footprints in the DNA : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-9676-A %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 205 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5888/
[57]
A. Fietzke, “Labelled Superposition,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Fietzke2014, TITLE = {Labelled Superposition}, AUTHOR = {Fietzke, Arnaud}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Fietzke, Arnaud %Y Weidenbach, Christoph %A referee: Hermanns, Holger %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Labelled Superposition : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-96A6-D %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 176 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5825/
[58]
S. Gerling, “Plugging in Trust and Privacy : Three Systems to Improve Widely used Ecosystems,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Gerling2014, TITLE = {Plugging in Trust and Privacy : Three Systems to Improve Widely used Ecosystems}, AUTHOR = {Gerling, Sebastian}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Gerling, Sebastian %Y Backes, Michael %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations %T Plugging in Trust and Privacy : Three Systems to Improve Widely used Ecosystems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-DD50-0 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 157 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5961/
[59]
J. Günther, “Ray Tracing of Dynamic Scenes,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{GuentherPhD2014, TITLE = {Ray Tracing of Dynamic Scenes}, AUTHOR = {G{\"u}nther, Johannes}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59295}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Günther, Johannes %Y Slusallek, Philipp %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Ray Tracing of Dynamic Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-54C0-5 %U urn:nbn:de:bsz:291-scidok-59295 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 82 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5929/
[60]
K. Halachev, “Exploratory Visualizations and Statistical Analysis of Large, Heterogeneous Epigenetic Datasets,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Halachev2014, TITLE = {Exploratory Visualizations and Statistical Analysis of Large, Heterogeneous Epigenetic Datasets}, AUTHOR = {Halachev, Konstantin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Halachev, Konstantin %Y Lengauer, Thomas %A referee: Bock, Christoph %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Exploratory Visualizations and Statistical Analysis of Large, Heterogeneous Epigenetic Datasets : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-96A8-9 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 163 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5911/
[61]
R. Ibragimov, “Exact and Heuristic Algorithms for Network Alignment using Graph Edit Distance Models,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Ibragimovphd14, TITLE = {Exact and Heuristic Algorithms for Network Alignment using Graph Edit Distance Models}, AUTHOR = {Ibragimov, Rashid}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Ibragimov, Rashid %Y Baumbach, Jan %A referee: Guo, Jiong %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Exact and Heuristic Algorithms for Network Alignment using Graph Edit Distance Models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-6E4C-7 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 149 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2015/5999/http://scidok.sulb.uni-saarland.de/doku/urheberrecht.php?la=de
[62]
A. Jain, “Data-driven Methods for Interactive Visual Content Creation and Manipulation,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{PhDThesis:JainArjun, TITLE = {Data-driven Methods for Interactive Visual Content Creation and Manipulation}, AUTHOR = {Jain, Arjun}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58210}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Jain, Arjun %Y Thormählen, Thorsten %A referee: Schiele, Bernt %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Data-driven Methods for Interactive Visual Content Creation and Manipulation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0023-EB82-2 %U urn:nbn:de:bsz:291-scidok-58210 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P XV, 82 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5821/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[63]
M. Khosla, “Multiple Choice Allocations with Small Maximum Loads,” Universität des Saarlandes, Saarbrücken, 2014.
Abstract
The idea of using multiple choices to improve allocation schemes is now well understood and is often illustrated by the following example. Suppose $n$ balls are allocated to $n$ bins, with each ball choosing a bin independently and uniformly at random. The maximum load, or the number of balls in the most loaded bin, will then be approximately $\frac{\log n}{\log \log n}$ with high probability. Suppose now the balls are allocated sequentially by placing a ball in the least loaded bin among $k \geq 2$ bins chosen independently and uniformly at random. Azar, Broder, Karlin, and Upfal showed that in this scenario the maximum load drops to $\frac{\log \log n}{\log k} + \Theta(1)$ with high probability, which is an exponential improvement over the previous case. In this thesis we investigate multiple choice allocations from a slightly different perspective. Instead of minimizing the maximum load, we fix the bin capacities and focus on maximizing the number of balls that can be allocated without overloading any bin. In the process that we consider we have $m = \lfloor cn \rfloor$ balls and $n$ bins. Each ball chooses $k$ bins independently and uniformly at random. Is it possible to assign each ball to one of its choices such that no bin receives more than $\ell$ balls? For all $k \geq 3$ and $\ell \geq 2$ we give a critical value, $c_{k,\ell}^*$, such that when $c < c_{k,\ell}^*$ such an allocation exists with high probability, and when $c > c_{k,\ell}^*$ this is not the case. In case such an allocation exists, how quickly can we find it? Previous work on the total allocation time for the case $k \geq 3$ and $\ell = 1$ has analyzed a breadth-first strategy which is shown to be linear only in expectation. We give a simple and efficient algorithm, which we call local search allocation (LSA), to find an allocation for all $k \geq 3$ and $\ell = 1$. Provided the number of balls is below (but arbitrarily close to) the theoretically achievable load threshold, we give a linear bound for the total allocation time that holds with high probability. We demonstrate, through simulations, an order of magnitude improvement for total and maximum allocation times when compared to the state-of-the-art method. Our results find applications in many areas including hashing, load balancing, data management, orientability of random hypergraphs, and maximum matchings in a special class of bipartite graphs.
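The quoted result of Azar, Broder, Karlin, and Upfal is easy to observe empirically. The following sketch (illustrative only; it simulates the classical sequential process, not the thesis's local search allocation) compares the maximum load for one, two, and three choices:

    import random

    def max_load(n, k, seed=0):
        """Throw n balls into n bins, placing each ball in the least loaded
        of k bins drawn independently and uniformly at random."""
        rng = random.Random(seed)
        loads = [0] * n
        for _ in range(n):
            candidates = [rng.randrange(n) for _ in range(k)]
            best = min(candidates, key=loads.__getitem__)
            loads[best] += 1
        return max(loads)

    n = 100_000
    for k in (1, 2, 3):
        print(k, max_load(n, k))
    # For n = 10^5 this typically reports a maximum load around 8 for
    # k = 1, but only around 4 for k = 2 and around 3 for k = 3.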
Export
BibTeX
@phdthesis{Khosla2014, TITLE = {Multiple Choice Allocations with Small Maximum Loads}, AUTHOR = {Khosla, Megha}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-56957}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {The idea of using multiple choices to improve allocation schemes is now well understood and is often illustrated by the following example. Suppose $n$ balls are allocated to $n$ bins, with each ball choosing a bin independently and uniformly at random. The maximum load, or the number of balls in the most loaded bin, will then be approximately $\frac{\log n}{\log \log n}$ with high probability. Suppose now the balls are allocated sequentially by placing a ball in the least loaded bin among $k \geq 2$ bins chosen independently and uniformly at random. Azar, Broder, Karlin, and Upfal showed that in this scenario the maximum load drops to $\frac{\log \log n}{\log k} + \Theta(1)$ with high probability, which is an exponential improvement over the previous case. In this thesis we investigate multiple choice allocations from a slightly different perspective. Instead of minimizing the maximum load, we fix the bin capacities and focus on maximizing the number of balls that can be allocated without overloading any bin. In the process that we consider we have $m = \lfloor cn \rfloor$ balls and $n$ bins. Each ball chooses $k$ bins independently and uniformly at random. Is it possible to assign each ball to one of its choices such that no bin receives more than $\ell$ balls? For all $k \geq 3$ and $\ell \geq 2$ we give a critical value, $c_{k,\ell}^*$, such that when $c < c_{k,\ell}^*$ such an allocation exists with high probability, and when $c > c_{k,\ell}^*$ this is not the case. In case such an allocation exists, how quickly can we find it? Previous work on the total allocation time for the case $k \geq 3$ and $\ell = 1$ has analyzed a breadth-first strategy which is shown to be linear only in expectation. We give a simple and efficient algorithm, which we call local search allocation (LSA), to find an allocation for all $k \geq 3$ and $\ell = 1$. Provided the number of balls is below (but arbitrarily close to) the theoretically achievable load threshold, we give a linear bound for the total allocation time that holds with high probability. We demonstrate, through simulations, an order of magnitude improvement for total and maximum allocation times when compared to the state-of-the-art method. Our results find applications in many areas including hashing, load balancing, data management, orientability of random hypergraphs, and maximum matchings in a special class of bipartite graphs.}, }
Endnote
%0 Thesis %A Khosla, Megha %Y Mehlhorn, Kurt %A referee: Panagiotou, Konstantinos %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Multiple Choice Allocations with Small Maximum Loads : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-836A-A %U urn:nbn:de:bsz:291-scidok-56957 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 63 p. %V phd %9 phd %X The idea of using multiple choices to improve allocation schemes is now well understood and is often illustrated by the following example. Suppose $n$ balls are allocated to $n$ bins, with each ball choosing a bin independently and uniformly at random. The maximum load, or the number of balls in the most loaded bin, will then be approximately $\frac{\log n}{\log \log n}$ with high probability. Suppose now the balls are allocated sequentially by placing a ball in the least loaded bin among $k \geq 2$ bins chosen independently and uniformly at random. Azar, Broder, Karlin, and Upfal showed that in this scenario the maximum load drops to $\frac{\log \log n}{\log k} + \Theta(1)$ with high probability, which is an exponential improvement over the previous case. In this thesis we investigate multiple choice allocations from a slightly different perspective. Instead of minimizing the maximum load, we fix the bin capacities and focus on maximizing the number of balls that can be allocated without overloading any bin. In the process that we consider we have $m = \lfloor cn \rfloor$ balls and $n$ bins. Each ball chooses $k$ bins independently and uniformly at random. Is it possible to assign each ball to one of its choices such that no bin receives more than $\ell$ balls? For all $k \geq 3$ and $\ell \geq 2$ we give a critical value, $c_{k,\ell}^*$, such that when $c < c_{k,\ell}^*$ such an allocation exists with high probability, and when $c > c_{k,\ell}^*$ this is not the case. In case such an allocation exists, how quickly can we find it? Previous work on the total allocation time for the case $k \geq 3$ and $\ell = 1$ has analyzed a breadth-first strategy which is shown to be linear only in expectation. We give a simple and efficient algorithm, which we call local search allocation (LSA), to find an allocation for all $k \geq 3$ and $\ell = 1$. Provided the number of balls is below (but arbitrarily close to) the theoretically achievable load threshold, we give a linear bound for the total allocation time that holds with high probability. We demonstrate, through simulations, an order of magnitude improvement for total and maximum allocation times when compared to the state-of-the-art method. Our results find applications in many areas including hashing, load balancing, data management, orientability of random hypergraphs, and maximum matchings in a special class of bipartite graphs. %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5695/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[64]
C. Klein, “Matrix Rounding, Evolutionary Algorithms, and Hole Detection,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{KleinChristianPhD2014, TITLE = {Matrix Rounding, Evolutionary Algorithms, and Hole Detection}, AUTHOR = {Klein, Christian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59164}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Klein, Christian %Y Doerr, Benjamin %A referee: Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Matrix Rounding, Evolutionary Algorithms, and Hole Detection : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0025-069A-F %U urn:nbn:de:bsz:291-scidok-59164 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P VI, 126 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5916/
[65]
S. K. Kondreddi, “Human Computing and Crowdsourcing Methods for Knowledge Acquisition,” Universität des Saarlandes, Saarbrücken, 2014.
Abstract
Ambiguity, complexity, and diversity in natural language textual expressions are major hindrances to automated knowledge extraction. As a result, state-of-the-art methods for extracting entities and relationships from unstructured data make incorrect extractions or produce noise. With the advent of human computing, computationally hard tasks have been addressed through human inputs. While text-based knowledge acquisition can benefit from this approach, humans alone cannot bear the burden of extracting knowledge from the vast textual resources that exist today. Even making payments for crowdsourced acquisition can quickly become prohibitively expensive. In this thesis we present principled methods that effectively garner human computing inputs for improving the extraction of knowledge-base facts from natural language texts. Our methods complement automatic extraction techniques with human computing to reap the benefits of both while overcoming each other's limitations. We present the architecture and implementation of HIGGINS, a system that combines an information extraction (IE) engine with a human computing (HC) engine to produce high-quality facts. The IE engine combines statistics derived from large Web corpora with semantic resources like WordNet and ConceptNet to construct a large dictionary of entity and relational phrases. It employs specifically designed statistical language models for phrase relatedness to come up with questions and relevant candidate answers that are presented to human workers. Through extensive experiments we establish the superiority of this approach in extracting relation-centric facts from text. In our experiments we extract facts about fictitious characters in narrative text, where the issues of diversity and complexity in expressing relations are far more pronounced. Finally, we also demonstrate how interesting human computing games can be designed for knowledge acquisition tasks.
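The abstract does not detail the phrase-relatedness models, so as a stand-in, the following hypothetical sketch scores the relatedness of two phrases by pointwise mutual information (PMI), the kind of corpus co-occurrence statistic such dictionaries are commonly built from; it is not the HIGGINS scoring model itself:

    import math

    def pmi(co_count, count_x, count_y, total):
        """Pointwise mutual information: the log-ratio of the observed
        co-occurrence probability of two phrases to the probability
        expected if the phrases occurred independently."""
        p_xy = co_count / total
        p_x, p_y = count_x / total, count_y / total
        return math.log(p_xy / (p_x * p_y))

    # Toy counts over a corpus of 10^6 contexts (hypothetical numbers):
    # the phrases "attacked" and "fought with" share contexts unusually often.
    print(pmi(co_count=120, count_x=900, count_y=650, total=1_000_000))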
Export
BibTeX
@phdthesis{Kondreddi2014b, TITLE = {Human Computing and Crowdsourcing Methods for Knowledge Acquisition}, AUTHOR = {Kondreddi, Sarath Kumar}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-57948}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Ambiguity, complexity, and diversity in natural language textual expressions are major hindrances to automated knowledge extraction. As a result, state-of-the-art methods for extracting entities and relationships from unstructured data make incorrect extractions or produce noise. With the advent of human computing, computationally hard tasks have been addressed through human inputs. While text-based knowledge acquisition can benefit from this approach, humans alone cannot bear the burden of extracting knowledge from the vast textual resources that exist today. Even making payments for crowdsourced acquisition can quickly become prohibitively expensive. In this thesis we present principled methods that effectively garner human computing inputs for improving the extraction of knowledge-base facts from natural language texts. Our methods complement automatic extraction techniques with human computing to reap the benefits of both while overcoming each other's limitations. We present the architecture and implementation of HIGGINS, a system that combines an information extraction (IE) engine with a human computing (HC) engine to produce high-quality facts. The IE engine combines statistics derived from large Web corpora with semantic resources like WordNet and ConceptNet to construct a large dictionary of entity and relational phrases. It employs specifically designed statistical language models for phrase relatedness to come up with questions and relevant candidate answers that are presented to human workers. Through extensive experiments we establish the superiority of this approach in extracting relation-centric facts from text. In our experiments we extract facts about fictitious characters in narrative text, where the issues of diversity and complexity in expressing relations are far more pronounced. Finally, we also demonstrate how interesting human computing games can be designed for knowledge acquisition tasks.}, }
Endnote
%0 Thesis %A Kondreddi, Sarath Kumar %Y Triantafillou, Peter %A referee: Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Human Computing and Crowdsourcing Methods for Knowledge Acquisition : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-3C3D-F %U urn:nbn:de:bsz:291-scidok-57948 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 116 p. %V phd %9 phd %X Ambiguity, complexity, and diversity in natural language textual expressions are major hindrances to automated knowledge extraction. As a result, state-of-the-art methods for extracting entities and relationships from unstructured data make incorrect extractions or produce noise. With the advent of human computing, computationally hard tasks have been addressed through human inputs. While text-based knowledge acquisition can benefit from this approach, humans alone cannot bear the burden of extracting knowledge from the vast textual resources that exist today. Even making payments for crowdsourced acquisition can quickly become prohibitively expensive. In this thesis we present principled methods that effectively garner human computing inputs for improving the extraction of knowledge-base facts from natural language texts. Our methods complement automatic extraction techniques with human computing to reap the benefits of both while overcoming each other's limitations. We present the architecture and implementation of HIGGINS, a system that combines an information extraction (IE) engine with a human computing (HC) engine to produce high-quality facts. The IE engine combines statistics derived from large Web corpora with semantic resources like WordNet and ConceptNet to construct a large dictionary of entity and relational phrases. It employs specifically designed statistical language models for phrase relatedness to come up with questions and relevant candidate answers that are presented to human workers. Through extensive experiments we establish the superiority of this approach in extracting relation-centric facts from text. In our experiments we extract facts about fictitious characters in narrative text, where the issues of diversity and complexity in expressing relations are far more pronounced. Finally, we also demonstrate how interesting human computing games can be designed for knowledge acquisition tasks. %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5794/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[66]
C. Kurz, “Constrained Camera Motion Estimation and 3D Reconstruction,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{KurzPhD2014, TITLE = {Constrained Camera Motion Estimation and {3D} Reconstruction}, AUTHOR = {Kurz, Christian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59439}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Kurz, Christian %Y Seidel, Hans-Peter %A referee: Thormählen, Thorsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Constrained Camera Motion Estimation and 3D Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-54C2-1 %U urn:nbn:de:bsz:291-scidok-59439 %I Universität des Saarlandes %C Saarbrücken %D 2014 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5943/
[67]
F. Makari Manshadi, “Scalable Optimization Algorithms for Recommender Systems,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{MakariManshadi2014, TITLE = {Scalable Optimization Algorithms for Recommender Systems}, AUTHOR = {Makari Manshadi, Faraz}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Makari Manshadi, Faraz %Y Gemulla, Rainer %A referee: Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Scalable Optimization Algorithms for Recommender Systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-96AA-5 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 121 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5922/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[68]
S. Metzger, “User-centric Knowledge Extraction and Maintenance,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Metzger2014, TITLE = {User-centric Knowledge Extraction and Maintenance}, AUTHOR = {Metzger, Steffen}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Metzger, Steffen %Y Schenkel, Ralf %A referee: Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T User-centric Knowledge Extraction and Maintenance : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-96AE-E %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 230 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5763/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[69]
I. Reshetouski, “Kaleidoscopic Imaging,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{ReshetouskiPhD2014, TITLE = {Kaleidoscopic Imaging}, AUTHOR = {Reshetouski, Ilya}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59308}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Reshetouski, Ilya %Y Seidel, Hans-Peter %A referee: Vetterli, Martin %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Kaleidoscopic Imaging : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-54C4-E %U urn:nbn:de:bsz:291-scidok-59308 %I Universität des Saarlandes %C Saarbrücken %D 2014 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5930/
[70]
M. Rohrbach, “Combining Visual Recognition and Computational Linguistics : Linguistic Knowledge for Visual Recognition and Natural Language Descriptions of Visual Content,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Rohrbach14, TITLE = {Combining Visual Recognition and Computational Linguistics : Linguistic Knowledge for Visual Recognition and Natural Language Descriptions of Visual Content}, AUTHOR = {Rohrbach, Marcus}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-57580}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Rohrbach, Marcus %Y Schiele, Bernt %A referee: Pinkal, Manfred %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Combining Visual Recognition and Computational Linguistics : Linguistic Knowledge for Visual Recognition and Natural Language Descriptions of Visual Content : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-850F-A %U urn:nbn:de:bsz:291-scidok-57580 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P X, 195 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5758/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[71]
R. Röttger, “Active Transitivity Clustering of Large-scale Biomedical Datasets,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Roettger2014, TITLE = {Active Transitivity Clustering of Large-scale Biomedical Datasets}, AUTHOR = {R{\"o}ttger, Richard}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Röttger, Richard %A referee: Lengauer, Thomas %Y Baumbach, Jan %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Active Transitivity Clustering of Large-scale Biomedical Datasets : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-96BE-A %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 215 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5809/
[72]
S. E. Schelhorn, “Going Viral : an Integrated View on Virological Data Analysis from Basic Research to Clinical Applications,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Schelhorn2014, TITLE = {Going Viral : an Integrated View on Virological Data Analysis from Basic Research to Clinical Applications}, AUTHOR = {Schelhorn, Sven Eric}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Schelhorn, Sven Eric %Y Lengauer, Thomas %A referee: Lenhof, Hans-Peter %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Going Viral : an Integrated View on Virological Data Analysis from Basic Research to Clinical Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-96C0-2 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P 323 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2014/5724/
[73]
C. Wu, “Inverse Rendering for Scene Reconstruction in General Environments,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{WuPhD2014, TITLE = {Inverse Rendering for Scene Reconstruction in General Environments}, AUTHOR = {Wu, Chenglei}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58326}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
Endnote
%0 Thesis %A Wu, Chenglei %A referee: Seidel, Hans-Peter %Y Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Inverse Rendering for Scene Reconstruction in General Environments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-34B7-6 %U urn:nbn:de:bsz:291-scidok-58326 %I Universität des Saarlandes %C Saarbrücken %D 2014 %P XVI, 184 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2014/5832/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
2013
[74]
A. Anand, “Indexing Methods for Web Archives,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
There have been numerous recent efforts to digitize previously published content and to preserve born-digital content, leading to the widespread growth of large text repositories. Web archives are such continuously growing text collections, which contain versions of documents spanning long time periods. Web archives present many opportunities for historical, cultural and political analyses. Consequently, there is a growing need for tools which can efficiently access and search them. In this work, we are interested in indexing methods for supporting text-search workloads over web archives, such as time-travel queries and phrase queries. To this end we make the following contributions: Time-travel queries are keyword queries with a temporal predicate, e.g., mpii saarland @ [06/2009], which return versions of documents from the past. We introduce a novel index organization strategy, called index sharding, for efficiently supporting time-travel queries without incurring additional index-size blowup. We also propose index-maintenance approaches which scale to such continuously growing collections. We develop query-optimization techniques for time-travel queries, called partition selection, which maximize recall at any given query-execution stage. We propose indexing methods to support phrase queries, e.g., to be or not to be that is the question. We index multi-word sequences and devise novel query-optimization methods over the indexed sequences to efficiently answer phrase queries. We demonstrate the superior performance of our approaches over existing methods by extensive experimentation on real-world web archives.
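As a minimal model of the temporal predicate (a sketch with assumed names such as Posting and time_travel; the thesis's sharded index organization and partition selection go well beyond this), a time-travel lookup filters a term's postings by the validity interval of each document version:

    from dataclasses import dataclass

    @dataclass
    class Posting:
        doc: int
        begin: int   # time from which this version was live (e.g., crawl time)
        end: int     # time at which the next version replaced it
        score: float

    def time_travel(index, term, t):
        """Keyword lookup with a temporal predicate: keep only the document
        versions whose validity interval contains the query time t."""
        return [p for p in index.get(term, []) if p.begin <= t < p.end]

    index = {"saarland": [Posting(doc=1, begin=100, end=200, score=0.7),
                          Posting(doc=1, begin=200, end=300, score=0.9),
                          Posting(doc=2, begin=150, end=400, score=0.4)]}
    print(time_travel(index, "saarland", 210))   # versions live at t = 210

Roughly speaking, index sharding organizes each such posting list so that a query only touches postings whose intervals can overlap the temporal predicate, rather than scanning the full list or replicating postings per time window.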
Export
BibTeX
@phdthesis{Anand2013, TITLE = {Indexing Methods for Web Archives}, AUTHOR = {Anand, Avishek}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {There have been numerous recent efforts to digitize previously published content and to preserve born-digital content, leading to the widespread growth of large text repositories. Web archives are such continuously growing text collections, which contain versions of documents spanning long time periods. Web archives present many opportunities for historical, cultural and political analyses. Consequently, there is a growing need for tools which can efficiently access and search them. In this work, we are interested in indexing methods for supporting text-search workloads over web archives, such as time-travel queries and phrase queries. To this end we make the following contributions: Time-travel queries are keyword queries with a temporal predicate, e.g., mpii saarland @ [06/2009], which return versions of documents from the past. We introduce a novel index organization strategy, called index sharding, for efficiently supporting time-travel queries without incurring additional index-size blowup. We also propose index-maintenance approaches which scale to such continuously growing collections. We develop query-optimization techniques for time-travel queries, called partition selection, which maximize recall at any given query-execution stage. We propose indexing methods to support phrase queries, e.g., to be or not to be that is the question. We index multi-word sequences and devise novel query-optimization methods over the indexed sequences to efficiently answer phrase queries. We demonstrate the superior performance of our approaches over existing methods by extensive experimentation on real-world web archives.}, }
Endnote
%0 Thesis %A Anand, Avishek %Y Berberich, Klaus %A referee: Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Indexing Methods for Web Archives : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-CB4B-0 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X There have been numerous recent efforts to digitize previously published content and to preserve born-digital content, leading to the widespread growth of large text repositories. Web archives are such continuously growing text collections, which contain versions of documents spanning long time periods. Web archives present many opportunities for historical, cultural and political analyses. Consequently, there is a growing need for tools which can efficiently access and search them. In this work, we are interested in indexing methods for supporting text-search workloads over web archives, such as time-travel queries and phrase queries. To this end we make the following contributions: Time-travel queries are keyword queries with a temporal predicate, e.g., mpii saarland @ [06/2009], which return versions of documents from the past. We introduce a novel index organization strategy, called index sharding, for efficiently supporting time-travel queries without incurring additional index-size blowup. We also propose index-maintenance approaches which scale to such continuously growing collections. We develop query-optimization techniques for time-travel queries, called partition selection, which maximize recall at any given query-execution stage. We propose indexing methods to support phrase queries, e.g., to be or not to be that is the question. We index multi-word sequences and devise novel query-optimization methods over the indexed sequences to efficiently answer phrase queries. We demonstrate the superior performance of our approaches over existing methods by extensive experimentation on real-world web archives. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5531/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[75]
O. Ciobotaru, “Rational Cryptography: Novel Constructions, Automated Verification and Unified Definitions,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Rational cryptography has recently emerged as a very promising field of research by combining notions and techniques from cryptography and game theory, because it offers an alternative to the rather inflexible traditional cryptographic model. In contrast to the classical view of cryptography where protocol participants are considered either honest or arbitrarily malicious, rational cryptography models participants as rational players that try to maximize their benefit and thus deviate from the protocol only if they gain an advantage by doing so. The main research goals for rational cryptography are the design of more efficient protocols when players adhere to a rational model, the design and implementation of automated proofs for rational security notions and the study of the intrinsic connections between game theoretic and cryptographic notions. In this thesis, we address all these issues. First we present the mathematical model and the design for a new rational file sharing protocol which we call RatFish. Next, we develop a general method for automated verification for rational cryptographic protocols and we show how to apply our technique in order to automatically derive the rational security property for RatFish. Finally, we study the intrinsic connections between game theory and cryptography by defining a new game theoretic notion, which we call game universal implementation, and by showing its equivalence with the notion of weak stand-alone security.
Export
BibTeX
@phdthesis{Ciobotaru2013, TITLE = {Rational Cryptography: Novel Constructions, Automated Verification and Unified Definitions}, AUTHOR = {Ciobotaru, Oana}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Rational cryptography has recently emerged as a very promising field of research by combining notions and techniques from cryptography and game theory, because it offers an alternative to the rather inflexible traditional cryptographic model. In contrast to the classical view of cryptography where protocol participants are considered either honest or arbitrarily malicious, rational cryptography models participants as rational players that try to maximize their benefit and thus deviate from the protocol only if they gain an advantage by doing so. The main research goals for rational cryptography are the design of more efficient protocols when players adhere to a rational model, the design and implementation of automated proofs for rational security notions and the study of the intrinsic connections between game theoretic and cryptographic notions. In this thesis, we address all these issues. First we present the mathematical model and the design for a new rational file sharing protocol which we call RatFish. Next, we develop a general method for automated verification for rational cryptographic protocols and we show how to apply our technique in order to automatically derive the rational security property for RatFish. Finally, we study the intrinsic connections between game theory and cryptography by defining a new game theoretic notion, which we call game universal implementation, and by showing its equivalence with the notion of weak stand-alone security.}, }
Endnote
%0 Thesis %A Ciobotaru, Oana %Y Backes, Michael %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations %T Rational Cryptography: Novel Constructions, Automated Verification and Unified Definitions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-CB58-1 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X Rational cryptography has recently emerged as a very promising field of research by combining notions and techniques from cryptography and game theory, because it offers an alternative to the rather inflexible traditional cryptographic model. In contrast to the classical view of cryptography where protocol participants are considered either honest or arbitrarily malicious, rational cryptography models participants as rational players that try to maximize their benefit and thus deviate from the protocol only if they gain an advantage by doing so. The main research goals for rational cryptography are the design of more efficient protocols when players adhere to a rational model, the design and implementation of automated proofs for rational security notions and the study of the intrinsic connections between game theoretic and cryptographic notions. In this thesis, we address all these issues. First we present the mathematical model and the design for a new rational file sharing protocol which we call RatFish. Next, we develop a general method for automated verification for rational cryptographic protocols and we show how to apply our technique in order to automatically derive the rational security property for RatFish. Finally, we study the intrinsic connections between game theory and cryptography by defining a new game theoretic notion, which we call game universal implementation, and by showing its equivalence with the notion of weak stand-alone security. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5392/
[76]
M. A. Granados Velásquez, “Advanced Editing Methods for Image and Video Sequences,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
In the context of image and video editing, this thesis proposes methods for modifying the semantic content of a recorded scene. Two different editing problems are approached: First, the removal of ghosting artifacts from high dynamic range (HDR) images recovered from exposure sequences, and second, the removal of objects from video sequences recorded with and without camera motion. These edits need to be performed in a way that the result looks plausible to humans, but without having to recover detailed models of the content of the scene, e.g. its geometry, reflectance, or illumination. The proposed editing methods add new key ingredients, such as camera noise models and global optimization frameworks, that help achieve results that surpass the capabilities of state-of-the-art methods. Using these ingredients, each proposed method defines local visual properties that approximate well the specific editing requirements of each task. These properties are then encoded into an energy function that, when globally minimized, produces the required editing results. The optimization of such energy functions corresponds to Bayesian inference problems that are solved efficiently using graph cuts. The proposed methods are demonstrated to outperform other state-of-the-art methods. Furthermore, they are demonstrated to work well on complex real-world scenarios that have not been previously addressed in the literature, i.e., highly cluttered scenes for HDR deghosting, and highly dynamic scenes and unconstrained camera motion for object removal from videos.
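The pipeline of encoding local visual properties into an energy function and minimizing it globally can be made concrete with a toy example. The sketch below is illustrative only: it uses a tiny set of four "pixels", brute-force minimization instead of the graph cuts used in the thesis, and invented costs.

    # Illustrative sketch (not the thesis code): a binary labeling energy of the
    # kind minimized with graph cuts, here solved by brute force on a toy 2x2
    # "image". Costs and the smoothness weight are invented.
    import itertools

    def energy(labels, data_cost, smooth_weight, neighbors):
        # E(x) = sum_i D_i(x_i) + sum_{(i,j)} w * [x_i != x_j]
        e = sum(data_cost[i][x] for i, x in enumerate(labels))
        e += sum(smooth_weight for i, j in neighbors if labels[i] != labels[j])
        return e

    # Four pixels; label 0 = "keep pixel", 1 = "replace pixel" (hypothetical costs).
    data_cost = [(0.2, 0.9), (0.8, 0.1), (0.7, 0.3), (0.4, 0.6)]
    neighbors = [(0, 1), (0, 2), (1, 3), (2, 3)]  # 4-connected 2x2 grid

    best = min(itertools.product((0, 1), repeat=4),
               key=lambda ls: energy(ls, data_cost, 0.25, neighbors))
    print(best, energy(best, data_cost, 0.25, neighbors))

Graph cuts find the same global minimum for energies of this form, but in low-order polynomial time rather than by enumerating all labelings.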
Export
BibTeX
@phdthesis{GranadosThesis2013, TITLE = {Advanced Editing Methods for Image and Video Sequences}, AUTHOR = {Granados Vel{\'a}squez, Miguel Andr{\'e}s}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55021}, LOCALID = {Local-ID: 2D353EDEDC2BDA47C1257BEA0053CCB8-GranadosThesis2013}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {In the context of image and video editing, this thesis proposes methods for modifying the semantic content of a recorded scene. Two different editing problems are approached: First, the removal of ghosting artifacts from high dynamic range (HDR) images recovered from exposure sequences, and second, the removal of objects from video sequences recorded with and without camera motion. These edits need to be performed in a way that the result looks plausible to humans, but without having to recover detailed models of the content of the scene, e.g. its geometry, reflectance, or illumination. The proposed editing methods add new key ingredients, such as camera noise models and global optimization frameworks, that help achieve results that surpass the capabilities of state-of-the-art methods. Using these ingredients, each proposed method defines local visual properties that approximate well the specific editing requirements of each task. These properties are then encoded into an energy function that, when globally minimized, produces the required editing results. The optimization of such energy functions corresponds to Bayesian inference problems that are solved efficiently using graph cuts. The proposed methods are demonstrated to outperform other state-of-the-art methods. Furthermore, they are demonstrated to work well on complex real-world scenarios that have not been previously addressed in the literature, i.e., highly cluttered scenes for HDR deghosting, and highly dynamic scenes and unconstrained camera motion for object removal from videos.}, }
Endnote
%0 Thesis %A Granados Velásquez, Miguel Andrés %Y Seidel, Hans-Peter %A referee: Kautz, Jan %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Advanced Editing Methods for Image and Video Sequences : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3D79-9 %U urn:nbn:de:bsz:291-scidok-55021 %F OTHER: Local-ID: 2D353EDEDC2BDA47C1257BEA0053CCB8-GranadosThesis2013 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X In the context of image and video editing, this thesis proposes methods for modifying the semantic content of a recorded scene. Two different editing problems are approached: First, the removal of ghosting artifacts from high dynamic range (HDR) images recovered from exposure sequences, and second, the removal of objects from video sequences recorded with and without camera motion. These edits need to be performed in a way that the result looks plausible to humans, but without having to recover detailed models of the content of the scene, e.g. its geometry, reflectance, or illumination. The proposed editing methods add new key ingredients, such as camera noise models and global optimization frameworks, that help achieve results that surpass the capabilities of state-of-the-art methods. Using these ingredients, each proposed method defines local visual properties that approximate well the specific editing requirements of each task. These properties are then encoded into an energy function that, when globally minimized, produces the required editing results. The optimization of such energy functions corresponds to Bayesian inference problems that are solved efficiently using graph cuts. The proposed methods are demonstrated to outperform other state-of-the-art methods. Furthermore, they are demonstrated to work well on complex real-world scenarios that have not been previously addressed in the literature, i.e., highly cluttered scenes for HDR deghosting, and highly dynamic scenes and unconstrained camera motion for object removal from videos. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2013/5502/
[77]
T. Helten, “Processing and Tracking Human Motions Using Optical, Inertial, and Depth Sensors,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
The processing of human motion data constitutes an important strand of research with many applications in computer animation, sport science and medicine. Currently, there exist various systems for recording human motion data that employ sensors of different modalities such as optical, inertial and depth sensors. Each of these sensor modalities has intrinsic advantages and disadvantages that make it suitable for capturing specific aspects of human motions, such as the overall course of a motion, the shape of the human body, or the kinematic properties of motions. In this thesis, we contribute algorithms that exploit the respective strengths of these different modalities for comparing, classifying, and tracking human motion in various scenarios. First, we show how our proposed techniques can be employed, e.g., for real-time motion reconstruction using efficient cross-modal retrieval techniques. Then, we discuss a practical application of inertial sensor-based features to the classification of trampoline motions. As a further contribution, we elaborate on estimating the human body shape from depth data with applications to personalized motion tracking. Finally, we introduce methods to stabilize a depth tracker in challenging situations such as in the presence of occlusions. Here, we exploit the availability of complementary inertial-based sensor information.
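As a rough illustration of comparing motion data across modalities, the sketch below measures the distance between two feature sequences with dynamic time warping (DTW), a standard tool in motion retrieval. The feature vectors and the choice of plain DTW are assumptions for illustration, not the thesis' specific method.

    # Illustrative sketch: DTW distance between two feature sequences, e.g. one
    # derived from inertial sensors and one from optical data after mapping both
    # into a common feature space. The vectors below are made up.
    import math

    def dtw(a, b):
        """Classic O(len(a)*len(b)) dynamic time warping distance."""
        INF = float("inf")
        n, m = len(a), len(b)
        d = [[INF] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = math.dist(a[i - 1], b[j - 1])
                d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
        return d[n][m]

    query = [(0.0, 1.0), (0.1, 0.9), (0.5, 0.5)]      # hypothetical inertial features
    candidate = [(0.0, 1.0), (0.4, 0.6), (0.5, 0.5)]  # hypothetical optical features
    print(dtw(query, candidate))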
Export
BibTeX
@phdthesis{Helten2013_PhDThesis, TITLE = {Processing and Tracking Human Motions Using Optical, Inertial, and Depth Sensors}, AUTHOR = {Helten, Thomas}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-56126}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {The processing of human motion data constitutes an important strand of research with many applications in computer animation, sport science and medicine. Currently, there exist various systems for recording human motion data that employ sensors of different modalities such as optical, inertial and depth sensors. Each of these sensor modalities has intrinsic advantages and disadvantages that make it suitable for capturing specific aspects of human motions, such as the overall course of a motion, the shape of the human body, or the kinematic properties of motions. In this thesis, we contribute algorithms that exploit the respective strengths of these different modalities for comparing, classifying, and tracking human motion in various scenarios. First, we show how our proposed techniques can be employed, e.g., for real-time motion reconstruction using efficient cross-modal retrieval techniques. Then, we discuss a practical application of inertial sensor-based features to the classification of trampoline motions. As a further contribution, we elaborate on estimating the human body shape from depth data with applications to personalized motion tracking. Finally, we introduce methods to stabilize a depth tracker in challenging situations such as in the presence of occlusions. Here, we exploit the availability of complementary inertial-based sensor information.}, }
Endnote
%0 Thesis %A Helten, Thomas %Y Müller, Meinard %A referee: Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Processing and Tracking Human Motions Using Optical, Inertial, and Depth Sensors : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3984-9 %U urn:nbn:de:bsz:291-scidok-56126 %F OTHER: 70346CB0842571B1C1257C58003538EF-Helten2013_PhDThesis %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X The processing of human motion data constitutes an important strand of research with many applications in computer animation, sport science and medicine. Currently, there exist various systems for recording human motion data that employ sensors of different modalities such as optical, inertial and depth sensors. Each of these sensor modalities has intrinsic advantages and disadvantages that make it suitable for capturing specific aspects of human motions, such as the overall course of a motion, the shape of the human body, or the kinematic properties of motions. In this thesis, we contribute algorithms that exploit the respective strengths of these different modalities for comparing, classifying, and tracking human motion in various scenarios. First, we show how our proposed techniques can be employed, e.g., for real-time motion reconstruction using efficient cross-modal retrieval techniques. Then, we discuss a practical application of inertial sensor-based features to the classification of trampoline motions. As a further contribution, we elaborate on estimating the human body shape from depth data with applications to personalized motion tracking. Finally, we introduce methods to stabilize a depth tracker in challenging situations such as in the presence of occlusions. Here, we exploit the availability of complementary inertial-based sensor information. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2013/5612/
[78]
T. Jurkiewicz, “Toward Better Computation Models for Modern Machines,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Modern computers are not random access machines (RAMs). They have a memory hierarchy, multiple cores, and virtual memory. We address the computational cost of address translation in virtual memory and the difficulties in the design of parallel algorithms on modern many-core machines. The starting point for our work on virtual memory is the observation that the analysis of some simple algorithms (random scan of an array, binary search, heapsort) in either the RAM model or the EM model (external memory model) does not correctly predict growth rates of actual running times. We propose the VAT model (virtual address translation) to account for the cost of address translations and analyze the algorithms mentioned above and others in the model. The predictions agree with the measurements. We also analyze the VAT-cost of cache-oblivious algorithms. In the second part of the thesis we present a case study of the design of an efficient 2D convex hull algorithm for GPUs. The algorithm is based on "the ultimate planar convex hull algorithm" of Kirkpatrick and Seidel, and it has been referred to as "the first successful implementation of the QuickHull algorithm on the GPU" by Gao et al. in their 2012 paper on the 3D convex hull. Our motivation for work on modern many-core machines is the general belief of the engineering community that theory does not produce applicable results, and that theoretical researchers are not aware of the difficulties that arise while adapting algorithms for practical use. We concentrate on showing how the high degree of parallelism available on GPUs can be applied to problems that do not readily decompose into many independent tasks.
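The VAT idea, that each memory access implies a walk through a multi-level translation tree whose nodes are themselves cached, can be sketched as a simple cost model. Everything below (tree depth, fanout, cache size, LRU policy) is an invented toy parameterization, not the calibrated model from the thesis.

    # Illustrative sketch of the VAT flavour: translating a virtual page walks a
    # multi-level page-table tree, and only tree nodes missing from a small
    # translation cache cost a memory access.
    from collections import OrderedDict
    import random

    def vat_cost(page_trace, levels=3, fanout_bits=9, cache_size=8):
        # Count page-table node fetches that miss a small LRU translation cache;
        # a node at depth d covers 2**(fanout_bits*(levels-d+1)) virtual pages.
        cache, misses = OrderedDict(), 0
        for page in page_trace:
            for depth in range(1, levels + 1):  # root-to-leaf walk
                node = (depth, page >> (fanout_bits * (levels - depth + 1)))
                if node in cache:
                    cache.move_to_end(node)
                else:
                    misses += 1
                    cache[node] = True
                    if len(cache) > cache_size:
                        cache.popitem(last=False)
        return misses

    pages = list(range(1 << 15))
    shuffled = pages[:]
    random.shuffle(shuffled)
    # A sequential scan translates cheaply; a random scan pays on nearly every access.
    print(vat_cost(pages), vat_cost(shuffled))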
Export
BibTeX
@phdthesis{Jurkiewicz2013, TITLE = {Toward Better Computation Models for Modern Machines}, AUTHOR = {Jurkiewicz, Tomasz}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55407}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Modern computers are not random access machines (RAMs). They have a memory hierarchy, multiple cores, and virtual memory. We address the computational cost of address translation in virtual memory and the difficulties in the design of parallel algorithms on modern many-core machines. The starting point for our work on virtual memory is the observation that the analysis of some simple algorithms (random scan of an array, binary search, heapsort) in either the RAM model or the EM model (external memory model) does not correctly predict growth rates of actual running times. We propose the VAT model (virtual address translation) to account for the cost of address translations and analyze the algorithms mentioned above and others in the model. The predictions agree with the measurements. We also analyze the VAT-cost of cache-oblivious algorithms. In the second part of the thesis we present a case study of the design of an efficient 2D convex hull algorithm for GPUs. The algorithm is based on ``the ultimate planar convex hull algorithm'' of Kirkpatrick and Seidel, and it has been referred to as ``the first successful implementation of the QuickHull algorithm on the GPU'' by Gao et al. in their 2012 paper on the 3D convex hull. Our motivation for work on modern many-core machines is the general belief of the engineering community that theory does not produce applicable results, and that theoretical researchers are not aware of the difficulties that arise while adapting algorithms for practical use. We concentrate on showing how the high degree of parallelism available on GPUs can be applied to problems that do not readily decompose into many independent tasks.}, }
Endnote
%0 Thesis %A Jurkiewicz, Tomasz %Y Mehlhorn, Kurt %A referee: Meyer, Ulrich %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Toward Better Computation Models for Modern Machines : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0018-4A9C-B %U urn:nbn:de:bsz:291-scidok-55407 %I Universität des Saarlandes %C Saarbrücken %D 2013 %P 94 p. %V phd %9 phd %X Modern computers are not random access machines (RAMs). They have a memory hierarchy, multiple cores, and virtual memory. We address the computational cost of address translation in virtual memory and the difficulties in the design of parallel algorithms on modern many-core machines. The starting point for our work on virtual memory is the observation that the analysis of some simple algorithms (random scan of an array, binary search, heapsort) in either the RAM model or the EM model (external memory model) does not correctly predict growth rates of actual running times. We propose the VAT model (virtual address translation) to account for the cost of address translations and analyze the algorithms mentioned above and others in the model. The predictions agree with the measurements. We also analyze the VAT-cost of cache-oblivious algorithms. In the second part of the thesis we present a case study of the design of an efficient 2D convex hull algorithm for GPUs. The algorithm is based on "the ultimate planar convex hull algorithm" of Kirkpatrick and Seidel, and it has been referred to as "the first successful implementation of the QuickHull algorithm on the GPU" by Gao et al. in their 2012 paper on the 3D convex hull. Our motivation for work on modern many-core machines is the general belief of the engineering community that theory does not produce applicable results, and that theoretical researchers are not aware of the difficulties that arise while adapting algorithms for practical use. We concentrate on showing how the high degree of parallelism available on GPUs can be applied to problems that do not readily decompose into many independent tasks. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2013/5540/
[79]
J. Kerber, “Of Assembling Small Sculptures and Disassembling Large Geometry,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
This thesis describes the research results and contributions that have been achieved during the author's doctoral work. It is divided into two independent parts, each of which is devoted to a particular research aspect. The first part covers the true-to-detail creation of digital pieces of art, so-called relief sculptures, from given 3D models. The main goal is to limit the depth of the contained objects with respect to a certain perspective without compromising the initial three-dimensional impression. Here, the preservation of significant features and especially their sharpness is crucial. Therefore, it is necessary to overemphasize fine surface details to ensure their perceptibility in the more complanate relief. Our developments are aimed at improving the flexibility and user-friendliness during the generation process. The main focus is on providing real-time solutions with intuitive usability that make it possible to create precise, lifelike and aesthetic results. These goals are reached by a GPU implementation, the use of efficient filtering techniques, and the replacement of user-defined parameters by adaptive values. Our methods are capable of processing dynamic scenes and allow the generation of seamless artistic reliefs which can be composed of multiple elements. The second part addresses the analysis of repetitive structures, so-called symmetries, within very large data sets. The automatic recognition of components and their patterns is a complex correspondence problem which has numerous applications ranging from information visualization through compression to automatic scene understanding. Recent algorithms reach their limits with a growing amount of data, since their runtimes rise quadratically. Our aim is to make even massive data sets manageable. Therefore, it is necessary to abstract features and to develop a suitable, low-dimensional descriptor which ensures an efficient, robust, and purposive search. A simple inspection of the proximity within the descriptor space helps to significantly reduce the number of necessary pairwise comparisons. Our method scales quasi-linearly and allows a rapid analysis of data sets which could not be handled by prior approaches because of their size.
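The second part's strategy, replacing quadratic all-pairs comparison with proximity search in a low-dimensional descriptor space, can be sketched as follows. The 2D descriptors and the simple grid quantization are made-up stand-ins for the thesis' actual descriptor and search structure.

    # Illustrative sketch: avoid all-pairs comparison by quantizing a
    # low-dimensional descriptor and comparing only items in the same bucket.
    from collections import defaultdict

    def candidate_pairs(descriptors, cell=0.5):
        """Group items by a quantized descriptor; pairs only form within groups."""
        buckets = defaultdict(list)
        for idx, d in enumerate(descriptors):
            key = tuple(int(x // cell) for x in d)
            buckets[key].append(idx)
        for group in buckets.values():
            for i in range(len(group)):
                for j in range(i + 1, len(group)):
                    yield group[i], group[j]

    # Made-up 2D descriptors of surface patches; near-identical ones repeat.
    desc = [(0.10, 0.11), (0.12, 0.10), (3.05, 1.90), (0.11, 0.12), (3.02, 1.88)]
    print(list(candidate_pairs(desc)))  # only plausible symmetry candidates

If the descriptor distributes items evenly across buckets, the number of candidate pairs grows roughly linearly with the data size instead of quadratically.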
Export
BibTeX
@phdthesis{Kerber2013_2, TITLE = {Of Assembling Small Sculptures and Disassembling Large Geometry}, AUTHOR = {Kerber, Jens}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55160}, LOCALID = {Local-ID: 0B9352B7950A1459C1257BF60042B83E-Kerber2013_2}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {This thesis describes the research results and contributions that have been achieved during the author's doctoral work. It is divided into two independent parts, each of which is devoted to a particular research aspect. The first part covers the true-to-detail creation of digital pieces of art, so-called relief sculptures, from given 3D models. The main goal is to limit the depth of the contained objects with respect to a certain perspective without compromising the initial three-dimensional impression. Here, the preservation of significant features and especially their sharpness is crucial. Therefore, it is necessary to overemphasize fine surface details to ensure their perceptibility in the more complanate relief. Our developments are aimed at improving the flexibility and user-friendliness during the generation process. The main focus is on providing real-time solutions with intuitive usability that make it possible to create precise, lifelike and aesthetic results. These goals are reached by a GPU implementation, the use of efficient filtering techniques, and the replacement of user-defined parameters by adaptive values. Our methods are capable of processing dynamic scenes and allow the generation of seamless artistic reliefs which can be composed of multiple elements. The second part addresses the analysis of repetitive structures, so-called symmetries, within very large data sets. The automatic recognition of components and their patterns is a complex correspondence problem which has numerous applications ranging from information visualization through compression to automatic scene understanding. Recent algorithms reach their limits with a growing amount of data, since their runtimes rise quadratically. Our aim is to make even massive data sets manageable. Therefore, it is necessary to abstract features and to develop a suitable, low-dimensional descriptor which ensures an efficient, robust, and purposive search. A simple inspection of the proximity within the descriptor space helps to significantly reduce the number of necessary pairwise comparisons. Our method scales quasi-linearly and allows a rapid analysis of data sets which could not be handled by prior approaches because of their size.}, }
Endnote
%0 Thesis %A Kerber, Jens %Y Seidel, Hans-Peter %A referee: Belyaev, Alexander %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Of Assembling Small Sculptures and Disassembling Large Geometry : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3D35-1 %U urn:nbn:de:bsz:291-scidok-55160 %F OTHER: Local-ID: 0B9352B7950A1459C1257BF60042B83E-Kerber2013_2 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X This thesis describes the research results and contributions that have been achieved during the author's doctoral work. It is divided into two independent parts, each of which is devoted to a particular research aspect. The first part covers the true-to-detail creation of digital pieces of art, so-called relief sculptures, from given 3D models. The main goal is to limit the depth of the contained objects with respect to a certain perspective without compromising the initial three-dimensional impression. Here, the preservation of significant features and especially their sharpness is crucial. Therefore, it is necessary to overemphasize fine surface details to ensure their perceptibility in the more complanate relief. Our developments are aimed at improving the flexibility and user-friendliness during the generation process. The main focus is on providing real-time solutions with intuitive usability that make it possible to create precise, lifelike and aesthetic results. These goals are reached by a GPU implementation, the use of efficient filtering techniques, and the replacement of user-defined parameters by adaptive values. Our methods are capable of processing dynamic scenes and allow the generation of seamless artistic reliefs which can be composed of multiple elements. The second part addresses the analysis of repetitive structures, so-called symmetries, within very large data sets. The automatic recognition of components and their patterns is a complex correspondence problem which has numerous applications ranging from information visualization through compression to automatic scene understanding. Recent algorithms reach their limits with a growing amount of data, since their runtimes rise quadratically. Our aim is to make even massive data sets manageable. Therefore, it is necessary to abstract features and to develop a suitable, low-dimensional descriptor which ensures an efficient, robust, and purposive search. A simple inspection of the proximity within the descriptor space helps to significantly reduce the number of necessary pairwise comparisons. Our method scales quasi-linearly and allows a rapid analysis of data sets which could not be handled by prior approaches because of their size. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2013/5516/
[80]
E. Kruglov, “Superposition Modulo Theory,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@phdthesis{KruglovDiss13, TITLE = {Superposition Modulo Theory}, AUTHOR = {Kruglov, Evgeny}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55597}, LOCALID = {Local-ID: F58B326B7199622DC1257C66003BEFFF-KruglovDiss13}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
Endnote
%0 Thesis %A Kruglov, Evgeny %Y Althaus, Ernst %A referee: Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Superposition Modulo Theory : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-7A1C-5 %F OTHER: Local-ID: F58B326B7199622DC1257C66003BEFFF-KruglovDiss13 %U urn:nbn:de:bsz:291-scidok-55597 %I Universität des Saarlandes %C Saarbrücken %D 2013 %P X, 229 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5559/ http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[81]
T. Lu, “Formal Verification of the Pastry Protocol,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@phdthesis{LuDiss13, TITLE = {Formal Verification of the {Pastry} Protocol}, AUTHOR = {Lu, Tianxiang}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55878}, LOCALID = {Local-ID: 53D311D21A10BD89C1257C66003CDFCF-LuDiss13}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
Endnote
%0 Thesis %A Lu, Tianxiang %Y Weidenbach, Christoph %A referee: Schmitt, Peter %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Formal Verification of the Pastry Protocol : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-7A22-6 %F OTHER: Local-ID: 53D311D21A10BD89C1257C66003CDFCF-LuDiss13 %U urn:nbn:de:bsz:291-scidok-55878 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2013/5587/
[82]
K. R. Patil, “Genome Signature based Sequence Comparison for Taxonomic Assignment and Tree Inference,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@phdthesis{Patil2013, TITLE = {Genome Signature based Sequence Comparison for Taxonomic Assignment and Tree Inference}, AUTHOR = {Patil, Kaustubh Raosaheb}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-52973}, LOCALID = {Local-ID: 58D1B1989200E496C1257BFF002517BF-Patil2013}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013-01}, }
Endnote
%0 Thesis %A Patil, Kaustubh Raosaheb %Y Lengauer, Thomas %A referee: McHardy, Alice Carolyn %+ Computational Genomics and Epidemiology, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Genomics and Epidemiology, MPI for Informatics, Max Planck Society %T Genome Signature based Sequence Comparison for Taxonomic Assignment and Tree Inference : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-7BB7-0 %U urn:nbn:de:bsz:291-scidok-52973 %F OTHER: Local-ID: 58D1B1989200E496C1257BFF002517BF-Patil2013 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5297/ http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[83]
L. Qu, “Sentiment Analysis with Limited Training Data,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Sentiments are positive and negative emotions, evaluations and stances. This dissertation focuses on learning-based systems for the automatic analysis of sentiments and comparisons in natural language text. The proposed approach consists of three contributions: 1. Bag-of-opinions model: For predicting document-level polarity and intensity, we proposed the bag-of-opinions model, which represents each document as a bag of sentiments and exploits the syntactic structures of sentiment-bearing phrases for improved rating prediction of online reviews. 2. Multi-experts model: Due to the sparsity of manually labeled training data, we designed the multi-experts model for sentence-level analysis of sentiment polarity and intensity, which fully exploits any available sentiment indicators, such as phrase-level predictors and sentence similarity measures. 3. LSSVMrae model: To understand the sentiments regarding entities, we proposed the LSSVMrae model for extracting sentiments and comparisons of entities at both the sentence and the subsentential level. Different granularities of analysis lead to different model complexities: the finer the granularity, the more complex the model. All proposed models aim to minimize the use of hand-labeled data by maximizing the use of freely available resources. These models also explore different feature representations to capture the compositional semantics inherent in sentiment-bearing expressions. Our experimental results on real-world data showed that all models significantly outperform state-of-the-art methods on the respective tasks.
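To make the bag-of-opinions idea concrete: each review is reduced to a bag of opinion units, and the rating is a base score plus the sum of the units' weights. The sketch below hand-sets the weights and the unit structure purely for illustration; in the thesis they are learned from data.

    # Illustrative sketch of a bag-of-opinions style rating predictor. An opinion
    # unit is (negation, modifier, polarity word); weights here are invented.
    WEIGHTS = {("", "", "good"): 0.8, ("", "very", "good"): 1.4,
               ("not", "", "good"): -1.0, ("", "", "awful"): -1.8}

    def predict(opinion_units, base=3.0):
        """Predicted rating = base score + sum of the units' weights."""
        return base + sum(WEIGHTS.get(u, 0.0) for u in opinion_units)

    review = [("", "very", "good"), ("not", "", "good")]  # "very good ... not good"
    print(predict(review))  # 3.4 on a hypothetical 1-5 star scale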
Export
BibTeX
@phdthesis{Qu2013, TITLE = {Sentiment Analysis with Limited Training Data}, AUTHOR = {Qu, Lizhen}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Sentiments are positive and negative emotions, evaluations and stances. This dissertation focuses on learning-based systems for the automatic analysis of sentiments and comparisons in natural language text. The proposed approach consists of three contributions: 1. Bag-of-opinions model: For predicting document-level polarity and intensity, we proposed the bag-of-opinions model, which represents each document as a bag of sentiments and exploits the syntactic structures of sentiment-bearing phrases for improved rating prediction of online reviews. 2. Multi-experts model: Due to the sparsity of manually labeled training data, we designed the multi-experts model for sentence-level analysis of sentiment polarity and intensity, which fully exploits any available sentiment indicators, such as phrase-level predictors and sentence similarity measures. 3. LSSVMrae model: To understand the sentiments regarding entities, we proposed the LSSVMrae model for extracting sentiments and comparisons of entities at both the sentence and the subsentential level. Different granularities of analysis lead to different model complexities: the finer the granularity, the more complex the model. All proposed models aim to minimize the use of hand-labeled data by maximizing the use of freely available resources. These models also explore different feature representations to capture the compositional semantics inherent in sentiment-bearing expressions. Our experimental results on real-world data showed that all models significantly outperform state-of-the-art methods on the respective tasks.}, }
Endnote
%0 Thesis %A Qu, Lizhen %Y Weikum, Gerhard %A referee: Gemulla, Rainer %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Sentiment Analysis with Limited Training Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-9796-9 %I Universität des Saarlandes %C Saarbrücken %D 2013 %P 133 p. %V phd %9 phd %X Sentiments are positive and negative emotions, evaluations and stances. This dissertation focuses on learning-based systems for the automatic analysis of sentiments and comparisons in natural language text. The proposed approach consists of three contributions: 1. Bag-of-opinions model: For predicting document-level polarity and intensity, we proposed the bag-of-opinions model, which represents each document as a bag of sentiments and exploits the syntactic structures of sentiment-bearing phrases for improved rating prediction of online reviews. 2. Multi-experts model: Due to the sparsity of manually labeled training data, we designed the multi-experts model for sentence-level analysis of sentiment polarity and intensity, which fully exploits any available sentiment indicators, such as phrase-level predictors and sentence similarity measures. 3. LSSVMrae model: To understand the sentiments regarding entities, we proposed the LSSVMrae model for extracting sentiments and comparisons of entities at both the sentence and the subsentential level. Different granularities of analysis lead to different model complexities: the finer the granularity, the more complex the model. All proposed models aim to minimize the use of hand-labeled data by maximizing the use of freely available resources. These models also explore different feature representations to capture the compositional semantics inherent in sentiment-bearing expressions. Our experimental results on real-world data showed that all models significantly outperform state-of-the-art methods on the respective tasks. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5615/ http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[84]
K. Scherbaum, “Data Driven Analysis of Faces from Images,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@phdthesis{Scherbaum2013z, TITLE = {Data Driven Analysis of Faces from Images}, AUTHOR = {Scherbaum, Kristina}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55212}, LOCALID = {Local-ID: 263F0D6B29F5A1A8C1257C600050EA30-Scherbaum2013}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
Endnote
%0 Thesis %A Scherbaum, Kristina %Y Seidel, Hans-Peter %A referee: Thormählen, Thorsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Data Driven Analysis of Faces from Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-1D08-3 %F OTHER: Local-ID: 263F0D6B29F5A1A8C1257C600050EA30-Scherbaum2013 %U urn:nbn:de:bsz:291-scidok-55212 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2013/5521/
[85]
M. Shaheen, “Cache Based Optimization of Stencil Computations: An Algorithmic Approach,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@phdthesis{PhDThesis2013:Shaheen_Mohammed, TITLE = {Cache Based Optimization of Stencil Computations an Algorithmic Approach}, AUTHOR = {Shaheen, Mohammed}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55494}, LOCALID = {Local-ID: 112EF87E6A67B9BEC1257C2E003399CB-PhDThesis2013:Shaheen_Mohammed}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
Endnote
%0 Thesis %A Shaheen, Mohammed %Y Theobalt, Christian %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Cache Based Optimization of Stencil Computations an Algorithmic Approach : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3A2F-7 %U urn:nbn:de:bsz:291-scidok-55494 %F OTHER: Local-ID: 112EF87E6A67B9BEC1257C2E003399CB-PhDThesis2013:Shaheen_Mohammed %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2013/5549/
[86]
A. Stupar, “Soundtrack Recommendation for Images,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
The drastic increase in the production of multimedia content has intensified research on its organization and retrieval. In this thesis, we address the problem of music retrieval when a set of images is given as the input query, i.e., the problem of soundtrack recommendation for images. The task at hand is to recommend appropriate music to be played during the presentation of a given set of query images. To tackle this problem, we formulate the hypothesis that the knowledge appropriate for the task is contained in publicly available contemporary movies. Our approach, Picasso, employs similarity search techniques inside the image and music domains, harvesting movies to form a link between the domains. To achieve a fair and unbiased comparison between different soundtrack recommendation approaches, we propose an evaluation benchmark. The evaluation results are reported for Picasso and the baseline approach, using the proposed benchmark. We further address two efficiency aspects that arise from the Picasso approach. First, we investigate the problem of processing top-K queries with set-defined selections and propose an index structure that aims at minimizing the query answering latency. Second, we address the problem of similarity search in high-dimensional spaces and propose two enhancements to the Locality Sensitive Hashing (LSH) scheme. We also investigate the prospects of a distributed similarity search algorithm based on LSH using the MapReduce framework. Finally, we give an overview of PicasSound, a smartphone application based on the Picasso approach.
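One of the efficiency topics above, locality-sensitive hashing, can be illustrated with the classic random-hyperplane scheme for cosine similarity. The sketch shows this textbook baseline, not the thesis' specific enhancements; the vectors and parameters are invented.

    # Illustrative sketch of random-hyperplane LSH: vectors whose bit signatures
    # agree tend to have small angular distance, so only same-signature
    # candidates need an exact comparison.
    import random

    def signature(vec, planes):
        """One bit per hyperplane: the sign of the dot product."""
        return tuple(int(sum(v * p for v, p in zip(vec, plane)) >= 0.0)
                     for plane in planes)

    dim, bits = 8, 6
    random.seed(42)
    planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

    a = [1.0] * dim
    b = [1.0] * 7 + [0.8]                  # near-duplicate of a
    c = [(-1.0) ** i for i in range(dim)]  # very different direction
    print(signature(a, planes) == signature(b, planes))  # likely True
    print(signature(a, planes) == signature(c, planes))  # likely False

In practice several such signatures are maintained in parallel hash tables to trade precision against recall.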
Export
BibTeX
@phdthesis{Stupar2012, TITLE = {Soundtrack Recommendation for Images}, AUTHOR = {Stupar, Aleksandar}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {The drastic increase in the production of multimedia content has intensified research on its organization and retrieval. In this thesis, we address the problem of music retrieval when a set of images is given as the input query, i.e., the problem of soundtrack recommendation for images. The task at hand is to recommend appropriate music to be played during the presentation of a given set of query images. To tackle this problem, we formulate the hypothesis that the knowledge appropriate for the task is contained in publicly available contemporary movies. Our approach, Picasso, employs similarity search techniques inside the image and music domains, harvesting movies to form a link between the domains. To achieve a fair and unbiased comparison between different soundtrack recommendation approaches, we propose an evaluation benchmark. The evaluation results are reported for Picasso and the baseline approach, using the proposed benchmark. We further address two efficiency aspects that arise from the Picasso approach. First, we investigate the problem of processing top-K queries with set-defined selections and propose an index structure that aims at minimizing the query answering latency. Second, we address the problem of similarity search in high-dimensional spaces and propose two enhancements to the Locality Sensitive Hashing (LSH) scheme. We also investigate the prospects of a distributed similarity search algorithm based on LSH using the MapReduce framework. Finally, we give an overview of PicasSound, a smartphone application based on the Picasso approach.}, }
Endnote
%0 Thesis %A Stupar, Aleksandar %Y Michel, Sebastian %A referee: Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T Soundtrack Recommendation for Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-9794-D %I Universität des Saarlandes %C Saarbrücken %D 2013 %P 149 p. %V phd %9 phd %X The drastic increase in the production of multimedia content has intensified research on its organization and retrieval. In this thesis, we address the problem of music retrieval when a set of images is given as the input query, i.e., the problem of soundtrack recommendation for images. The task at hand is to recommend appropriate music to be played during the presentation of a given set of query images. To tackle this problem, we formulate the hypothesis that the knowledge appropriate for the task is contained in publicly available contemporary movies. Our approach, Picasso, employs similarity search techniques inside the image and music domains, harvesting movies to form a link between the domains. To achieve a fair and unbiased comparison between different soundtrack recommendation approaches, we propose an evaluation benchmark. The evaluation results are reported for Picasso and the baseline approach, using the proposed benchmark. We further address two efficiency aspects that arise from the Picasso approach. First, we investigate the problem of processing top-K queries with set-defined selections and propose an index structure that aims at minimizing the query answering latency. Second, we address the problem of similarity search in high-dimensional spaces and propose two enhancements to the Locality Sensitive Hashing (LSH) scheme. We also investigate the prospects of a distributed similarity search algorithm based on LSH using the MapReduce framework. Finally, we give an overview of PicasSound, a smartphone application based on the Picasso approach. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5526/ http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[87]
M. Sunkel, “Statistical Part-based Models for Object Detection in Large 3D Scans,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
3D scanning technology has matured to a point where very large-scale acquisition of high-resolution geometry has become feasible. However, having large quantities of 3D data poses new technical challenges. Many applications of practical use require an understanding of the semantics of the acquired geometry. Consequently, scene understanding plays a key role for many applications. This thesis is concerned with two core topics: 3D object detection and semantic alignment. We address the problem of efficiently detecting large quantities of objects in 3D scans according to object categories learned from sparse user annotation. Objects are modeled by a collection of smaller sub-parts and a graph structure representing part dependencies. The thesis introduces two novel approaches: a part-based chain-structured Markov model and a general part-based full correlation model. Both models come with efficient detection schemes which allow for interactive run-times.
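For the chain-structured model, the best joint placement of parts can be found by dynamic programming along the chain. The sketch below is a generic Viterbi-style maximization over invented scores, meant only to convey why the chain structure yields efficient detection; it is not the thesis' detection scheme.

    # Illustrative sketch: maximize sum of unary part scores plus pairwise
    # compatibilities along a chain of parts, in time linear in the chain length.
    def best_placement(unary, pairwise):
        """unary[t][s]: score of part t at candidate location s;
        pairwise[t][s][s2]: compatibility of part t at s with part t+1 at s2."""
        n, m = len(unary), len(unary[0])
        score = [list(unary[0])]
        back = []
        for t in range(1, n):
            row, ptr = [], []
            for s2 in range(m):
                prev = max(range(m),
                           key=lambda s: score[-1][s] + pairwise[t - 1][s][s2])
                row.append(score[-1][prev] + pairwise[t - 1][prev][s2] + unary[t][s2])
                ptr.append(prev)
            score.append(row)
            back.append(ptr)
        last = max(range(m), key=lambda s: score[-1][s])
        path = [last]
        for ptr in reversed(back):      # walk the back-pointers to recover the path
            path.append(ptr[path[-1]])
        return list(reversed(path))

    unary = [[1.0, 0.2], [0.1, 0.9], [0.5, 0.4]]  # 3 parts, 2 candidate locations
    pairwise = [[[0.3, 0.0], [0.0, 0.3]]] * 2     # bonus for matching locations
    print(best_placement(unary, pairwise))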
Export
BibTeX
@phdthesis{SunkelThesis2013, TITLE = {Statistical Part-based Models for Object Detection in Large {3D} Scans}, AUTHOR = {Sunkel, Martin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55128}, LOCALID = {Local-ID: D229974BF6B66B74C1257BF2004DF924-SunkelThesis2013}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013-09}, ABSTRACT = {3D scanning technology has matured to a point where very large-scale acquisition of high-resolution geometry has become feasible. However, having large quantities of 3D data poses new technical challenges. Many applications of practical use require an understanding of the semantics of the acquired geometry. Consequently, scene understanding plays a key role for many applications. This thesis is concerned with two core topics: 3D object detection and semantic alignment. We address the problem of efficiently detecting large quantities of objects in 3D scans according to object categories learned from sparse user annotation. Objects are modeled by a collection of smaller sub-parts and a graph structure representing part dependencies. The thesis introduces two novel approaches: a part-based chain-structured Markov model and a general part-based full correlation model. Both models come with efficient detection schemes which allow for interactive run-times.}, }
Endnote
%0 Thesis %A Sunkel, Martin %Y Seidel, Hans-Peter %A referee: Wand, Michael %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Statistical Part-based Models for Object Detection in Large 3D Scans : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3D3F-D %U urn:nbn:de:bsz:291-scidok-55128 %F OTHER: Local-ID: D229974BF6B66B74C1257BF2004DF924-SunkelThesis2013 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X 3D scanning technology has matured to a point where very large-scale acquisition of high-resolution geometry has become feasible. However, having large quantities of 3D data poses new technical challenges. Many applications of practical use require an understanding of the semantics of the acquired geometry. Consequently, scene understanding plays a key role for many applications. This thesis is concerned with two core topics: 3D object detection and semantic alignment. We address the problem of efficiently detecting large quantities of objects in 3D scans according to object categories learned from sparse user annotation. Objects are modeled by a collection of smaller sub-parts and a graph structure representing part dependencies. The thesis introduces two novel approaches: a part-based chain-structured Markov model and a general part-based full correlation model. Both models come with efficient detection schemes which allow for interactive run-times. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2013/5512/
[88]
B. Taneva, “Automatic Population of Knowledge Bases with Multimodal Data about Named Entities,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Knowledge bases are of great importance for Web search, recommendations, and many Information Retrieval tasks. However, maintaining them for not so popular entities is often a bottleneck. Typically, such entities have limited textual coverage and only a few ontological facts. Moreover, these entities are not well populated with multimodal data, such as images, videos, or audio recordings. The goals in this thesis are (1) to populate a given knowledge base with multimodal data about entities, such as images or audio recordings, and (2) to ease the task of maintaining and expanding the textual knowledge about a given entity, by recommending valuable text excerpts to the contributors of knowledge bases. The thesis makes three main contributions. The first two contributions concentrate on finding images of named entities with high precision, high recall, and high visual diversity. Our main focus is on less popular entities, for which image search engines fail to retrieve good results. Our methods utilize background knowledge about the entity, such as ontological facts or a short description, and a visual-based image similarity to rank and diversify a set of candidate images. Our third contribution is an approach for extracting text contents related to a given entity. It leverages a language-model-based similarity between a short description of the entity and the text sources, and solves a budget-constrained optimization program without any assumptions on the text structure. Moreover, our approach is also able to reliably extract entity-related audio excerpts from news podcasts. We derive the time boundaries from the usually very noisy audio transcriptions.
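The budget-constrained selection of text excerpts can be approximated greedily: rank excerpts by similarity per unit cost and add them until the budget is exhausted. This greedy stand-in and its toy data are assumptions for illustration; the thesis solves an optimization program rather than this heuristic.

    # Illustrative sketch: pick excerpts most similar to an entity description,
    # subject to a total length budget (cost = word count; numbers invented).
    def select_excerpts(excerpts, budget):
        """excerpts: list of (text, similarity_to_entity_description)."""
        ranked = sorted(excerpts,
                        key=lambda e: e[1] / len(e[0].split()), reverse=True)
        chosen, spent = [], 0
        for text, sim in ranked:
            cost = len(text.split())
            if spent + cost <= budget:
                chosen.append(text)
                spent += cost
        return chosen

    pool = [("Born in 1970 in Kyoto.", 0.9),
            ("The weather was pleasant that year.", 0.1),
            ("Founded the institute in 2001.", 0.8)]
    print(select_excerpts(pool, budget=12))  # keeps the two entity-relevant excerpts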
Export
BibTeX
@phdthesis{TanevaPhDThesis, TITLE = {Automatic Population of Knowledge Bases with Multimodal Data about Named Entities}, AUTHOR = {Taneva, Bilyana}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-54839}, LOCALID = {Local-ID: 28FC9CE2EBDB4763C1257BD40056934A-TanevaPhDThesis}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Knowledge bases are of great importance for Web search, recommendations, and many Information Retrieval tasks. However, maintaining them for not so popular entities is often a bottleneck. Typically, such entities have limited textual coverage and only a few ontological facts. Moreover, these entities are not well populated with multimodal data, such as images, videos, or audio recordings. The goals in this thesis are (1) to populate a given knowledge base with multimodal data about entities, such as images or audio recordings, and (2) to ease the task of maintaining and expanding the textual knowledge about a given entity, by recommending valuable text excerpts to the contributors of knowledge bases. The thesis makes three main contributions. The first two contributions concentrate on finding images of named entities with high precision, high recall, and high visual diversity. Our main focus is on less popular entities, for which image search engines fail to retrieve good results. Our methods utilize background knowledge about the entity, such as ontological facts or a short description, and a visual-based image similarity to rank and diversify a set of candidate images. Our third contribution is an approach for extracting text contents related to a given entity. It leverages a language-model-based similarity between a short description of the entity and the text sources, and solves a budget-constrained optimization program without any assumptions on the text structure. Moreover, our approach is also able to reliably extract entity-related audio excerpts from news podcasts. We derive the time boundaries from the usually very noisy audio transcriptions.}, }
Endnote
%0 Thesis %A Taneva, Bilyana %Y Weikum, Gerhard %A referee: Suchanek, Fabian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Automatic Population of Knowledge Bases with Multimodal Data about Named Entities : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-389C-E %U urn:nbn:de:bsz:291-scidok-54839 %F OTHER: Local-ID: 28FC9CE2EBDB4763C1257BD40056934A-TanevaPhDThesis %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X Knowledge bases are of great importance for Web search, recommendations, and many Information Retrieval tasks. However, maintaining them for not so popular entities is often a bottleneck. Typically, such entities have limited textual coverage and only a few ontological facts. Moreover, these entities are not well populated with multimodal data, such as images, videos, or audio recordings. The goals in this thesis are (1) to populate a given knowledge base with multimodal data about entities, such as images or audio recordings, and (2) to ease the task of maintaining and expanding the textual knowledge about a given entity, by recommending valuable text excerpts to the contributors of knowledge bases. The thesis makes three main contributions. The first two contributions concentrate on finding images of named entities with high precision, high recall, and high visual diversity. Our main focus is on less popular entities, for which image search engines fail to retrieve good results. Our methods utilize background knowledge about the entity, such as ontological facts or a short description, and a visual-based image similarity to rank and diversify a set of candidate images. Our third contribution is an approach for extracting text contents related to a given entity. It leverages a language-model-based similarity between a short description of the entity and the text sources, and solves a budget-constrained optimization program without any assumptions on the text structure. Moreover, our approach is also able to reliably extract entity-related audio excerpts from news podcasts. We derive the time boundaries from the usually very noisy audio transcriptions. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5483/ http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[89]
Y. Wang, “Methods and Tools for Temporal Knowledge Harvesting,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
To extend the traditional knowledge base with a temporal dimension, this thesis offers methods and tools for harvesting temporal facts from both semi-structured and textual sources. Our contributions are briefly summarized as follows. 1. Timely YAGO: A temporal knowledge base called Timely YAGO (T-YAGO), which extends YAGO with temporal attributes, is built. We define a simple RDF-style data model to support temporal knowledge. 2. PRAVDA: To be able to harvest as many temporal facts from free text as possible, we develop the PRAVDA system. It utilizes a graph-based semi-supervised learning algorithm to extract fact observations, which are further cleaned up by an Integer Linear Program based constraint solver. We also attempt to harvest spatio-temporal facts to track a person's trajectory. 3. PRAVDA-live: A user-centric interactive knowledge harvesting system, called PRAVDA-live, is developed for extracting facts from natural language free text. It is built on the framework of PRAVDA. It supports fact extraction for user-defined relations from ad-hoc selected text documents and ready-to-use RDF exports. 4. T-URDF: We present a simple and efficient representation model for time-dependent uncertainty in combination with first-order inference rules and recursive queries over RDF-like knowledge bases. We adopt the common possible-worlds semantics known from probabilistic databases and extend it towards histogram-like confidence distributions that capture the validity of facts across time. All of these components are fully implemented systems, which together form an integrative architecture. PRAVDA and PRAVDA-live aim at gathering new facts (particularly temporal facts), which T-URDF then reconciles. Finally, these facts are stored in a (temporal) knowledge base called T-YAGO. A SPARQL-like time-aware query language, together with a visualization tool, is designed for T-YAGO. Temporal knowledge can also be applied to document summarization.
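A minimal sketch of the T-YAGO-style data model: RDF-like triples extended with a validity interval and queried by time point. The fact contents and the interval convention (begin inclusive, end exclusive) are invented for illustration.

    # Illustrative sketch of a temporal fact store: plain subject-predicate-object
    # triples with a validity interval, queryable by time point.
    from collections import namedtuple

    Fact = namedtuple("Fact", "subject predicate object begin end")

    facts = [Fact("Player_X", "playsFor", "Club_A", 2004, 2007),
             Fact("Player_X", "playsFor", "Club_B", 2007, 2010)]

    def holds_at(store, subject, predicate, year):
        """Objects for which the fact is valid at the given year."""
        return [f.object for f in store
                if f.subject == subject and f.predicate == predicate
                and f.begin <= year < f.end]

    print(holds_at(facts, "Player_X", "playsFor", 2008))  # ['Club_B']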
Export
BibTeX
@phdthesis{Wang-thesis2013, TITLE = {Methods and Tools for Temporal Knowledge Harvesting}, AUTHOR = {Wang, Yafang}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-50967}, LOCALID = {Local-ID: 142737B17504ED10C1257B19006B30E4-Wang-thesis2013}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {To extend the traditional knowledge base with a temporal dimension, this thesis offers methods and tools for harvesting temporal facts from both semi-structured and textual sources. Our contributions are briefly summarized as follows. 1. Timely YAGO: A temporal knowledge base called Timely YAGO (T-YAGO), which extends YAGO with temporal attributes, is built. We define a simple RDF-style data model to support temporal knowledge. 2. PRAVDA: To be able to harvest as many temporal facts from free text as possible, we develop the PRAVDA system. It utilizes a graph-based semi-supervised learning algorithm to extract fact observations, which are further cleaned up by an Integer Linear Program based constraint solver. We also attempt to harvest spatio-temporal facts to track a person's trajectory. 3. PRAVDA-live: A user-centric interactive knowledge harvesting system, called PRAVDA-live, is developed for extracting facts from natural language free text. It is built on the framework of PRAVDA. It supports fact extraction for user-defined relations from ad-hoc selected text documents and ready-to-use RDF exports. 4. T-URDF: We present a simple and efficient representation model for time-dependent uncertainty in combination with first-order inference rules and recursive queries over RDF-like knowledge bases. We adopt the common possible-worlds semantics known from probabilistic databases and extend it towards histogram-like confidence distributions that capture the validity of facts across time. All of these components are fully implemented systems, which together form an integrative architecture. PRAVDA and PRAVDA-live aim at gathering new facts (particularly temporal facts), which T-URDF then reconciles. Finally, these facts are stored in a (temporal) knowledge base called T-YAGO. A SPARQL-like time-aware query language, together with a visualization tool, is designed for T-YAGO. Temporal knowledge can also be applied to document summarization.}, }
Endnote
%0 Thesis %A Wang, Yafang %Y Weikum, Gerhard %A referee: Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Methods and Tools for Temporal Knowledge Harvesting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3892-2 %F OTHER: Local-ID: 142737B17504ED10C1257B19006B30E4-Wang-thesis2013 %U urn:nbn:de:bsz:291-scidok-50967 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X To extend the traditional knowledge base with a temporal dimension, this thesis offers methods and tools for harvesting temporal facts from both semi-structured and textual sources. Our contributions are briefly summarized as follows. (1) Timely YAGO: A temporal knowledge base called Timely YAGO (T-YAGO), which extends YAGO with temporal attributes, is built. We define a simple RDF-style data model to support temporal knowledge. (2) PRAVDA: To harvest as many temporal facts from free text as possible, we develop the PRAVDA system. It utilizes a graph-based semi-supervised learning algorithm to extract fact observations, which are further cleaned up by an Integer Linear Program based constraint solver. We also attempt to harvest spatio-temporal facts to track a person's trajectory. (3) PRAVDA-live: A user-centric interactive knowledge harvesting system, called PRAVDA-live, is developed for extracting facts from natural-language free text. It is built on the framework of PRAVDA. It supports fact extraction of user-defined relations from ad-hoc selected text documents and ready-to-use RDF exports. (4) T-URDF: We present a simple and efficient representation model for time-dependent uncertainty in combination with first-order inference rules and recursive queries over RDF-like knowledge bases. We adopt the common possible-worlds semantics known from probabilistic databases and extend it towards histogram-like confidence distributions that capture the validity of facts across time. All of these components are fully implemented systems, which together form an integrative architecture. PRAVDA and PRAVDA-live aim at gathering new facts (particularly temporal facts), which T-URDF then reconciles. Finally, these facts are stored in a (temporal) knowledge base, called T-YAGO. A SPARQL-like time-aware query language, together with a visualization tool, is designed for T-YAGO. Temporal knowledge can also be applied to document summarization. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5096/
2012
[90]
R. Awadallah, “Methods for Constructing an Opinion Network for Politically Controversial Topics,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
The US presidential race, the re-election of President Hugo Chavez, and the economic crisis in Greece and other European countries are some of the controversial topics playing out in the news every day. To understand the landscape of opinions on political controversies, it would be helpful to know which politician or other stakeholder takes which position - support or opposition - on specific aspects of these topics. The work described in this thesis aims to automatically derive a map of the opinions-people network from news and other Web documents. The focus is on acquiring opinions held by various stakeholders on politically controversial topics. This opinions-people network serves as a knowledge base of opinions in the form of ⟨opinion holder⟩ ⟨opinion⟩ ⟨topic⟩ triples. Our system to build this knowledge base makes use of online news sources in order to extract opinions from text snippets. These sources come with a set of unique challenges. For example, processing text snippets involves not just identifying the topic and the opinion, but also attributing that opinion to a specific opinion holder. This requires making use of deep parsing and analyzing the parse tree. Moreover, in order to ensure uniformity, both the topic and the opinion holder should be mapped to canonical strings, and the topics should also be organized into a hierarchy. Our system relies on two main components: i) acquiring opinions, which uses a combination of techniques to extract opinions from online news sources, and ii) organizing topics, which crawls and extracts debates from online sources and organizes these debates into a hierarchy of politically controversial topics. We present systematic evaluations of the different components of our system and show their high accuracies. We also present applications that require such political analysis, for example identifying flip-floppers, political bias, and dissenters; these applications can make use of the knowledge base of opinions.
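To make the triple representation concrete, here is a minimal sketch of such a knowledge base together with a toy detector for one of the applications named above, flip-flopper identification; the holders, stances, and topics are hypothetical, and the thesis' extraction pipeline is of course far richer than this lookup.

from collections import defaultdict

# Hypothetical ⟨opinion holder⟩ ⟨opinion⟩ ⟨topic⟩ triples.
opinions = [
    ("Politician_A", "support", "austerity_measures"),
    ("Politician_A", "oppose", "austerity_measures"),
    ("Politician_B", "oppose", "austerity_measures"),
]

def find_flip_floppers(triples):
    # A holder flip-flops on a topic if the knowledge base records both
    # a supporting and an opposing stance for that (holder, topic) pair.
    stances = defaultdict(set)
    for holder, stance, topic in triples:
        stances[(holder, topic)].add(stance)
    return {holder for (holder, _), s in stances.items()
            if {"support", "oppose"} <= s}

print(find_flip_floppers(opinions))  # {'Politician_A'}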
Export
BibTeX
@phdthesis{AwadallahPhd2012, TITLE = {Methods for Constructing an Opinion Network for Politically Controversial Topics}, AUTHOR = {Awadallah, Rawia}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {The US presidential race, the re-election of President Hugo Chavez, and the economic crisis in Greece and other European countries are some of the controversial topics playing out in the news every day. To understand the landscape of opinions on political controversies, it would be helpful to know which politician or other stakeholder takes which position -- support or opposition -- on specific aspects of these topics. The work described in this thesis aims to automatically derive a map of the opinions-people network from news and other Web documents. The focus is on acquiring opinions held by various stakeholders on politically controversial topics. This opinions-people network serves as a knowledge base of opinions in the form of $\langle$opinion holder$\rangle$ $\langle$opinion$\rangle$ $\langle$topic$\rangle$ triples. Our system to build this knowledge base makes use of online news sources in order to extract opinions from text snippets. These sources come with a set of unique challenges. For example, processing text snippets involves not just identifying the topic and the opinion, but also attributing that opinion to a specific opinion holder. This requires making use of deep parsing and analyzing the parse tree. Moreover, in order to ensure uniformity, both the topic and the opinion holder should be mapped to canonical strings, and the topics should also be organized into a hierarchy. Our system relies on two main components: i) acquiring opinions, which uses a combination of techniques to extract opinions from online news sources, and ii) organizing topics, which crawls and extracts debates from online sources and organizes these debates into a hierarchy of politically controversial topics. We present systematic evaluations of the different components of our system and show their high accuracies. We also present applications that require such political analysis, for example identifying flip-floppers, political bias, and dissenters; these applications can make use of the knowledge base of opinions.}, }
Endnote
%0 Thesis %A Awadallah, Rawia %Y Weikum, Gerhard %A referee: Rauber, Andreas %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Methods for Constructing an Opinion Network for Politically Controversial Topics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-CC92-8 %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %X The US presidential race, the re-election of President Hugo Chavez, and the economic crisis in Greece and other European countries are some of the controversial topics playing out in the news every day. To understand the landscape of opinions on political controversies, it would be helpful to know which politician or other stakeholder takes which position - support or opposition - on specific aspects of these topics. The work described in this thesis aims to automatically derive a map of the opinions-people network from news and other Web documents. The focus is on acquiring opinions held by various stakeholders on politically controversial topics. This opinions-people network serves as a knowledge base of opinions in the form of ⟨opinion holder⟩ ⟨opinion⟩ ⟨topic⟩ triples. Our system to build this knowledge base makes use of online news sources in order to extract opinions from text snippets. These sources come with a set of unique challenges. For example, processing text snippets involves not just identifying the topic and the opinion, but also attributing that opinion to a specific opinion holder. This requires making use of deep parsing and analyzing the parse tree. Moreover, in order to ensure uniformity, both the topic and the opinion holder should be mapped to canonical strings, and the topics should also be organized into a hierarchy. Our system relies on two main components: i) acquiring opinions, which uses a combination of techniques to extract opinions from online news sources, and ii) organizing topics, which crawls and extracts debates from online sources and organizes these debates into a hierarchy of politically controversial topics. We present systematic evaluations of the different components of our system and show their high accuracies. We also present applications that require such political analysis, for example identifying flip-floppers, political bias, and dissenters; these applications can make use of the knowledge base of opinions. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5037/
[91]
A. Baak, “Retrieval-based Approaches for Tracking and Reconstructing Human Motions,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{PhDThesisBaak, TITLE = {Retrieval-based Approaches for Tracking and Reconstructing Human Motions}, AUTHOR = {Baak, Andreas}, LANGUAGE = {eng}, LOCALID = {Local-ID: BEB52808520FB526C1257AEE003A0264-PhDThesisBaak}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012-11}, }
Endnote
%0 Thesis %A Baak, Andreas %Y Rosenhahn, Bodo %A referee: Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Retrieval-based Approaches for Tracking and Reconstructing Human Motions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-F4E1-1 %F OTHER: Local-ID: BEB52808520FB526C1257AEE003A0264-PhDThesisBaak %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5029/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[92]
A. Broschart, “Efficient Query Processing and Index Tuning Using Proximity Scores,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
In the presence of growing data, the need for efficient query processing under result quality and index size control becomes more and more of a challenge for search engines. We show how to use proximity scores to make query processing effective and efficient with focus on either of the optimization goals. More precisely, we make the following contributions: • We present a comprehensive comparative analysis of proximity score models and a rigorous analysis of the potential of phrases and adapt a leading proximity score model for XML data. • We discuss the feasibility of all presented proximity score models for top-k query processing and present a novel index combining a content and proximity score that helps to accelerate top-k query processing and improves result quality. • We present a novel, distributed index tuning framework for term and term pair index lists that optimizes pruning parameters by means of well-defined optimization criteria under disk space constraints. Indexes can be tuned with emphasis on efficiency or effectiveness: the resulting indexes yield fast processing at high result quality. • We show that pruned index lists processed with a merge join outperform top-k query processing with unpruned lists at a high result quality. • Moreover, we present a hybrid index structure for improved cold cache run times.
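To give a flavour of proximity scoring, the toy accumulator below rewards close co-occurrences of two query terms with a contribution of 1/d^2, so tight, phrase-like matches dominate the score; this is only a generic illustration, not any of the specific proximity score models compared in the thesis.

def pair_proximity(pos_a, pos_b):
    # Sum 1/d^2 over all occurrence pairs of two query terms; nearby
    # occurrences contribute far more than distant ones.
    return sum(1.0 / (a - b) ** 2 for a in pos_a for b in pos_b if a != b)

# Hypothetical token positions of two query terms in one document.
score = pair_proximity([3, 17], [4, 40])
print(round(score, 3))  # dominated by the adjacent pair at positions 3 and 4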
Export
BibTeX
@phdthesis{Broschart_PhD2012, TITLE = {Efficient Query Processing and Index Tuning Using Proximity Scores}, AUTHOR = {Broschart, Andreas}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-DE4B2520B99264A3C1257B1900434A8C-Broschart_PhD2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {In the presence of growing data, the need for efficient query processing under result quality and index size control becomes more and more of a challenge for search engines. We show how to use proximity scores to make query processing effective and efficient with focus on either of the optimization goals. More precisely, we make the following contributions: \mbox{$\bullet$} We present a comprehensive comparative analysis of proximity score models and a rigorous analysis of the potential of phrases and adapt a leading proximity score model for XML data. \mbox{$\bullet$} We discuss the feasibility of all presented proximity score models for top-k query processing and present a novel index combining a content and proximity score that helps to accelerate top-k query processing and improves result quality. \mbox{$\bullet$} We present a novel, distributed index tuning framework for term and term pair index lists that optimizes pruning parameters by means of well-defined optimization criteria under disk space constraints. Indexes can be tuned with emphasis on efficiency or effectiveness: the resulting indexes yield fast processing at high result quality. \mbox{$\bullet$} We show that pruned index lists processed with a merge join outperform top-k query processing with unpruned lists at a high result quality. \mbox{$\bullet$} Moreover, we present a hybrid index structure for improved cold cache run times.}, }
Endnote
%0 Thesis %A Broschart, Andreas %Y Schenkel, Ralf %Y Suel, Torsten %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Efficient Query Processing and Index Tuning Using Proximity Scores : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6275-D %F EDOC: 647546 %F OTHER: Local-ID: C1256DBF005F876D-DE4B2520B99264A3C1257B1900434A8C-Broschart_PhD2012 %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %X In the presence of growing data, the need for efficient query processing under result quality and index size control becomes more and more of a challenge for search engines. We show how to use proximity scores to make query processing effective and efficient with focus on either of the optimization goals. More precisely, we make the following contributions: • We present a comprehensive comparative analysis of proximity score models and a rigorous analysis of the potential of phrases and adapt a leading proximity score model for XML data. • We discuss the feasibility of all presented proximity score models for top-k query processing and present a novel index combining a content and proximity score that helps to accelerate top-k query processing and improves result quality. • We present a novel, distributed index tuning framework for term and term pair index lists that optimizes pruning parameters by means of well-defined optimization criteria under disk space constraints. Indexes can be tuned with emphasis on efficiency or effectiveness: the resulting indexes yield fast processing at high result quality. • We show that pruned index lists processed with a merge join outperform top-k query processing with unpruned lists at a high result quality. • Moreover, we present a hybrid index structure for improved cold cache run times. %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4981/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[93]
T. Crecelius, “Socially Enhanced Search and Exploration in Social Tagging Networks,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Social tagging networks have become highly popular for publishing and searching contents. Users in such networks can review, rate and comment on contents, or annotate them with keywords (social tags) to give short but exact text representations of even non-textual contents. In addition, there is an inherent support for interactions and relationships among users. Thus, users naturally form groups of friends or of common interests. We address three research areas in our work utilising these intrinsic features of social tagging networks. (1) We investigate new approaches for exploiting the social knowledge of and the relationships between users for searching and recommending relevant contents, and integrate them in a comprehensive framework, coined SENSE, for search in social tagging networks. (2) To dynamically update precomputed lists of transitive friends in descending order of their distance in user graphs of social tagging networks, we provide an algorithm for incrementally solving the all pairs shortest distance problem in large, disk-resident graphs and formally prove its correctness. (3) Since users are content providers in social tagging networks, users may keep their own data at independent, local peers that collaborate in a distributed P2P network. We provide an algorithm for such systems to counter cheating of peers in authority computations over social networks. The viability of each solution is demonstrated by extensive experiments regarding effectiveness and efficiency.
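For contribution (2), the following sketch shows the textbook O(n^2) relaxation that updates an all-pairs distance matrix when a single edge is inserted; it conveys the shape of the incremental problem, while the algorithm in the thesis additionally handles large, disk-resident graphs and proves correctness formally.

INF = float("inf")

def insert_edge(dist, u, v, w):
    # After inserting the directed edge (u, v) with positive weight w,
    # any pair (x, y) can only improve via the path x -> u -> v -> y.
    nodes = list(dist)
    for x in nodes:
        for y in nodes:
            through = dist[x][u] + w + dist[v][y]
            if through < dist[x][y]:
                dist[x][y] = through

# Hypothetical 3-node graph with precomputed shortest distances.
dist = {"a": {"a": 0, "b": 5, "c": INF},
        "b": {"a": INF, "b": 0, "c": INF},
        "c": {"a": INF, "b": INF, "c": 0}}
insert_edge(dist, "b", "c", 2)
print(dist["a"]["c"])  # 7, realised by the path a -> b -> c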
Export
BibTeX
@phdthesis{Crecelius2012, TITLE = {Socially Enhanced Search and Exploration in Social Tagging Networks}, AUTHOR = {Crecelius, Tom}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-48548}, LOCALID = {Local-ID: C1256DBF005F876D-09A3BA69BFF35ED9C12579FA002F601D-Crecelius2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Social tagging networks have become highly popular for publishing and searching contents. Users in such networks can review, rate and comment on contents, or annotate them with keywords (social tags) to give short but exact text representations of even non-textual contents. In addition, there is an inherent support for interactions and relationships among users. Thus, users naturally form groups of friends or of common interests. We address three research areas in our work utilising these intrinsic features of social tagging networks. (1) We investigate new approaches for exploiting the social knowledge of and the relationships between users for searching and recommending relevant contents, and integrate them in a comprehensive framework, coined SENSE, for search in social tagging networks. (2) To dynamically update precomputed lists of transitive friends in descending order of their distance in user graphs of social tagging networks, we provide an algorithm for incrementally solving the all pairs shortest distance problem in large, disk-resident graphs and formally prove its correctness. (3) Since users are content providers in social tagging networks, users may keep their own data at independent, local peers that collaborate in a distributed P2P network. We provide an algorithm for such systems to counter cheating of peers in authority computations over social networks. The viability of each solution is demonstrated by extensive experiments regarding effectiveness and efficiency.}, }
Endnote
%0 Thesis %A Crecelius, Tom %Y Schenkel, Ralf %A referee: Amer-Yahia, Sihem %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Socially Enhanced Search and Exploration in Social Tagging Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-620B-C %F EDOC: 647462 %F OTHER: Local-ID: C1256DBF005F876D-09A3BA69BFF35ED9C12579FA002F601D-Crecelius2012 %U urn:nbn:de:bsz:291-scidok-48548 %I Universität des Saarlandes %C Saarbrücken %D 2012 %P 238 p. %V phd %9 phd %X Social tagging networks have become highly popular for publishing and searching contents. Users in such networks can review, rate and comment on contents, or annotate them with keywords (social tags) to give short but exact text representations of even non-textual contents. In addition, there is an inherent support for interactions and relationships among users. Thus, users naturally form groups of friends or of common interests. We address three research areas in our work utilising these intrinsic features of social tagging networks. (1) We investigate new approaches for exploiting the social knowledge of and the relationships between users for searching and recommending relevant contents, and integrate them in a comprehensive framework, coined SENSE, for search in social tagging networks. (2) To dynamically update precomputed lists of transitive friends in descending order of their distance in user graphs of social tagging networks, we provide an algorithm for incrementally solving the all pairs shortest distance problem in large, disk-resident graphs and formally prove its correctness. (3) Since users are content providers in social tagging networks, users may keep their own data at independent, local peers that collaborate in a distributed P2P network. We provide an algorithm for such systems to counter cheating of peers in authority computations over social networks. The viability of each solution is demonstrated by extensive experiments regarding effectiveness and efficiency. %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4854/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[94]
D. Denev, “Methods and Models for Web Archive Crawling,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Web archives offer a rich and plentiful source of information to researchers, analysts, and legal experts. For this purpose, they gather Web sites as the sites change over time. In order to maintain high standards of data quality, Web archives have to collect all versions of the Web sites. Due to limited resources and technical constraints, this is not possible. Therefore, Web archives consist of versions archived at various time points without guarantee for mutual consistency. This thesis presents a model for assessing the data quality in Web archives as well as a family of crawling strategies yielding high-quality captures. We distinguish between single-visit crawling strategies for exploratory and visit-revisit crawling strategies for evidentiary purposes. Single-visit strategies download every page exactly once, aiming for an ``undistorted'' capture of the ever-changing Web. We express the quality of the resulting capture with the ``blur'' quality measure. In contrast, visit-revisit strategies download every page twice. The initial downloads of all pages form the visit phase of the crawling strategy. The second downloads are grouped together in the revisit phase. These two phases enable us to check which pages changed during the crawling process. Thus, we can identify the pages that are consistent with each other. The quality of the visit-revisit captures is expressed by the ``coherence'' measure. Quality-conscious strategies are based on predictions of the change behaviour of individual pages. We model the Web site dynamics by Poisson processes with page-specific change rates. Furthermore, we show that these rates can be statistically predicted. Finally, we propose visualization techniques for exploring the quality of the resulting Web archives. A fully functional prototype demonstrates the practical viability of our approach.
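The Poisson change model mentioned above has a particularly compact form: with a page-specific change rate λ, the probability that a page changed at least once within an interval of length Δt is 1 − exp(−λ·Δt). A small sketch, with the rate value chosen arbitrarily for illustration:

import math

def change_probability(rate_per_day, delta_days):
    # P(at least one change within delta_days) under a Poisson process
    # with page-specific rate lambda = rate_per_day.
    return 1.0 - math.exp(-rate_per_day * delta_days)

# A page that changes on average 0.2 times per day, revisited after 3 days.
print(round(change_probability(0.2, 3), 3))  # 0.451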
Export
BibTeX
@phdthesis{DenevPhD2012, TITLE = {Methods and Models for Web Archive Crawling}, AUTHOR = {Denev, Dimitar}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-92B687F6B976DAC4C1257A65004F67A6-DenevPhD2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Web archives offer a rich and plentiful source of information to researchers, analysts, and legal experts. For this purpose, they gather Web sites as the sites change over time. In order to maintain high standards of data quality, Web archives have to collect all versions of the Web sites. Due to limited resources and technical constraints, this is not possible. Therefore, Web archives consist of versions archived at various time points without guarantee for mutual consistency. This thesis presents a model for assessing the data quality in Web archives as well as a family of crawling strategies yielding high-quality captures. We distinguish between single-visit crawling strategies for exploratory and visit-revisit crawling strategies for evidentiary purposes. Single-visit strategies download every page exactly once, aiming for an ``undistorted'' capture of the ever-changing Web. We express the quality of the resulting capture with the ``blur'' quality measure. In contrast, visit-revisit strategies download every page twice. The initial downloads of all pages form the visit phase of the crawling strategy. The second downloads are grouped together in the revisit phase. These two phases enable us to check which pages changed during the crawling process. Thus, we can identify the pages that are consistent with each other. The quality of the visit-revisit captures is expressed by the ``coherence'' measure. Quality-conscious strategies are based on predictions of the change behaviour of individual pages. We model the Web site dynamics by Poisson processes with page-specific change rates. Furthermore, we show that these rates can be statistically predicted. Finally, we propose visualization techniques for exploring the quality of the resulting Web archives. A fully functional prototype demonstrates the practical viability of our approach.}, }
Endnote
%0 Thesis %A Denev, Dimitar %Y Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Methods and Models for Web Archive Crawling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6217-1 %F EDOC: 647475 %F OTHER: Local-ID: C1256DBF005F876D-92B687F6B976DAC4C1257A65004F67A6-DenevPhD2012 %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %X Web archives offer a rich and plentiful source of information to researchers, analysts, and legal experts. For this purpose, they gather Web sites as the sites change over time. In order to maintain high standards of data quality, Web archives have to collect all versions of the Web sites. Due to limited resources and technical constraints, this is not possible. Therefore, Web archives consist of versions archived at various time points without guarantee for mutual consistency. This thesis presents a model for assessing the data quality in Web archives as well as a family of crawling strategies yielding high-quality captures. We distinguish between single-visit crawling strategies for exploratory and visit-revisit crawling strategies for evidentiary purposes. Single-visit strategies download every page exactly once, aiming for an ``undistorted'' capture of the ever-changing Web. We express the quality of the resulting capture with the ``blur'' quality measure. In contrast, visit-revisit strategies download every page twice. The initial downloads of all pages form the visit phase of the crawling strategy. The second downloads are grouped together in the revisit phase. These two phases enable us to check which pages changed during the crawling process. Thus, we can identify the pages that are consistent with each other. The quality of the visit-revisit captures is expressed by the ``coherence'' measure. Quality-conscious strategies are based on predictions of the change behaviour of individual pages. We model the Web site dynamics by Poisson processes with page-specific change rates. Furthermore, we show that these rates can be statistically predicted. Finally, we propose visualization techniques for exploring the quality of the resulting Web archives. A fully functional prototype demonstrates the practical viability of our approach. %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4937/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[95]
P. Didyk, “Perceptual Display: Exceeding Display Limitations by Exploiting the Human Visual System,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{Didyk2012, TITLE = {Perceptual Display: Exceeding Display Limitations by Exploiting the Human Visual System}, AUTHOR = {Didyk, Piotr}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-49311}, LOCALID = {Local-ID: 92393E91F27D5B62C1257A710042EDA1-Didyk2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Didyk, Piotr %A referee: Seidel, Hans-Peter %Y Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptual Display: Exceeding Display Limitations by Exploiting the Human Visual System : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-13CC-E %F OTHER: Local-ID: 92393E91F27D5B62C1257A710042EDA1-Didyk2012 %U urn:nbn:de:bsz:291-scidok-49311 %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4931/
[96]
S. Ebert, “Semi-supervised Learning for Image Classification,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Object class recognition is an active topic in computer vision still presenting many challenges. In most approaches, this task is addressed by supervised learning algorithms that need a large quantity of labels to perform well. This leads either to small datasets (< 10,000 images) that capture only a subset of the real-world class distribution (but with a controlled and verified labeling procedure), or to large datasets that are more representative but also add more label noise. Therefore, semi-supervised learning is a promising direction. It requires only a few labels while simultaneously making use of the vast amount of images available today. We address object class recognition with semi-supervised learning. These algorithms depend on the underlying structure given by the data, the image description, and the similarity measure, as well as on the quality of the labels. This insight leads to the main research questions of this thesis: Is the structure given by labeled and unlabeled data more important than the algorithm itself? Can we improve this neighborhood structure by a better similarity metric or with more representative unlabeled data? Is there a connection between the quality of labels and the overall performance and how can we get more representative labels? We answer all these questions, i.e., we provide an extensive evaluation, we propose several graph improvements, and we introduce a novel active learning framework to get more representative labels.
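As a concrete instance of the graph-based setting the thesis studies, the sketch below runs textbook label propagation on a similarity graph: label mass diffuses along edges while the known labels stay clamped. It is a generic baseline under the stated assumptions (a symmetric similarity matrix without isolated nodes), not the exact algorithms evaluated in the thesis.

import numpy as np

def label_propagation(W, y, n_classes, iters=100):
    # W: (n, n) similarity matrix; y: class id per point, -1 if unlabeled.
    P = W / W.sum(axis=1, keepdims=True)   # row-normalised transition matrix
    labeled = y >= 0
    Y = np.zeros((len(y), n_classes))
    Y[labeled, y[labeled]] = 1.0
    for _ in range(iters):
        Y = P @ Y                          # diffuse label mass along edges
        Y[labeled] = 0.0
        Y[labeled, y[labeled]] = 1.0       # clamp the known labels
    return Y.argmax(axis=1)

# A 4-node chain graph with the two endpoints labeled.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(label_propagation(W, np.array([0, -1, -1, 1]), n_classes=2))  # [0 0 1 1]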
Export
BibTeX
@phdthesis{EbertDiss2012, TITLE = {Semi-supervised Learning for Image Classification}, AUTHOR = {Ebert, Sandra}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-52659}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Object class recognition is an active topic in computer vision still presenting many challenges. In most approaches, this task is addressed by supervised learning algorithms that need a large quantity of labels to perform well. This leads either to small datasets (< 10,000 images) that capture only a subset of the real-world class distribution (but with a controlled and verified labeling procedure), or to large datasets that are more representative but also add more label noise. Therefore, semi-supervised learning is a promising direction. It requires only a few labels while simultaneously making use of the vast amount of images available today. We address object class recognition with semi-supervised learning. These algorithms depend on the underlying structure given by the data, the image description, and the similarity measure, as well as on the quality of the labels. This insight leads to the main research questions of this thesis: Is the structure given by labeled and unlabeled data more important than the algorithm itself? Can we improve this neighborhood structure by a better similarity metric or with more representative unlabeled data? Is there a connection between the quality of labels and the overall performance and how can we get more representative labels? We answer all these questions, i.e., we provide an extensive evaluation, we propose several graph improvements, and we introduce a novel active learning framework to get more representative labels.}, }
Endnote
%0 Thesis %A Ebert, Sandra %Y Schiele, Bernt %A referee: Bischof, Horst %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Semi-supervised Learning for Image Classification : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0013-F787-B %F OTHER: A876B5595E818773C1257B19003EA758-EbertDiss2012 %U urn:nbn:de:bsz:291-scidok-52659 %I Universität des Saarlandes %C Saarbrücken %D 2012 %P XI, 163 p. %V phd %9 phd %X Object class recognition is an active topic in computer vision still presenting many challenges. In most approaches, this task is addressed by supervised learning algorithms that need a large quantity of labels to perform well. This leads either to small datasets (< 10,000 images) that capture only a subset of the real-world class distribution (but with a controlled and verified labeling procedure), or to large datasets that are more representative but also add more label noise. Therefore, semi-supervised learning is a promising direction. It requires only a few labels while simultaneously making use of the vast amount of images available today. We address object class recognition with semi-supervised learning. These algorithms depend on the underlying structure given by the data, the image description, and the similarity measure, as well as on the quality of the labels. This insight leads to the main research questions of this thesis: Is the structure given by labeled and unlabeled data more important than the algorithm itself? Can we improve this neighborhood structure by a better similarity metric or with more representative unlabeled data? Is there a connection between the quality of labels and the overall performance and how can we get more representative labels? We answer all these questions, i.e., we provide an extensive evaluation, we propose several graph improvements, and we introduce a novel active learning framework to get more representative labels. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5265/
[97]
S. Elbassuoni, “Effective Searching of RDF Knowledge Bases,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{Elbassuoni2011, TITLE = {Effective Searching of {RDF} Knowledge Bases}, AUTHOR = {Elbassuoni, Shady}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-5AC1FB349CA835F1C12579AB002FFB29-Elbassuoni2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Elbassuoni, Shady %Y Weikum, Gerhard %A referee: Nejdl, Wolfgang %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Effective Searching of RDF Knowledge Bases : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-5FFC-4 %F EDOC: 647461 %F OTHER: Local-ID: C1256DBF005F876D-5AC1FB349CA835F1C12579AB002FFB29-Elbassuoni2011 %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4708/
[98]
P. Emeliyanenko, “Harnessing the Power of GPUs for Problems in Real Algebraic Geometry,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{PhDEmeliyanenko12, TITLE = {Harnessing the Power of {GPUs} for Problems in Real Algebraic Geometry}, AUTHOR = {Emeliyanenko, Pavel}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-49953}, LOCALID = {Local-ID: 67210896377E6C6CC1257AFB006221B2-PhDEmeliyanenko12}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Emeliyanenko, Pavel %Y Mehlhorn, Kurt %A referee: Sagraloff, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Harnessing the Power of GPUs for Problems in Real Algebraic Geometry : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-B940-C %F OTHER: Local-ID: 67210896377E6C6CC1257AFB006221B2-PhDEmeliyanenko12 %U urn:nbn:de:bsz:291-scidok-49953 %I Universität des Saarlandes %C Saarbrücken %D 2012 %P 168 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4995/pdf/thesis.pdf %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php
[99]
M. Fouz, “Randomized Rumor Spreading in Social Networks & Complete Graphs,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{FouzDiss2012, TITLE = {Randomized Rumor Spreading in Social Networks \& Complete Graphs}, AUTHOR = {Fouz, Mahmoud}, LANGUAGE = {eng}, LOCALID = {Local-ID: 21D54E873E79BCA6C1257B0C00400A64-FouzDiss2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Fouz, Mahmoud %Y Doerr, Benjamin %A referee: Bläser, Markus %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Randomized Rumor Spreading in Social Networks & Complete Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-B911-5 %F OTHER: Local-ID: 21D54E873E79BCA6C1257B0C00400A64-FouzDiss2012 %I Universität des Saarlandes %C Saarbrücken %D 2012 %P II,114 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4903/pdf/thesis_1672012.pdf
[100]
P. M. Grosche, “Signal Processing Methods for Beat Tracking, Music Segmentation, and Audio Retrieval,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
The goal of music information retrieval (MIR) is to develop novel strategies and techniques for organizing, exploring, accessing, and understanding music data in an efficient manner. The conversion of waveform-based audio data into semantically meaningful feature representations by the use of digital signal processing techniques is at the center of MIR and constitutes a difficult field of research because of the complexity and diversity of music signals. In this thesis, we introduce novel signal processing methods that allow for extracting musically meaningful information from audio signals. As main strategy, we exploit musical knowledge about the signals' properties to derive feature representations that show a significant degree of robustness against musical variations but still exhibit a high musical expressiveness. We apply this general strategy to three different areas of MIR: Firstly, we introduce novel techniques for extracting tempo and beat information, where we particularly consider challenging music with changing tempo and soft note onsets. Secondly, we present novel algorithms for the automated segmentation and analysis of folk song field recordings, where one has to cope with significant fluctuations in intonation and tempo as well as recording artifacts. Thirdly, we explore a cross-version approach to content-based music retrieval based on the query-by-example paradigm. In all three areas, we focus on application scenarios where strong musical variations make the extraction of musically meaningful information a challenging task.
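As one concrete signal-processing ingredient of this kind, the sketch below computes a half-wave-rectified spectral flux novelty curve, whose peaks indicate likely note onsets from which tempo and beat candidates can be derived; this is a standard textbook building block, not the specific methods developed in the thesis, and the spectrogram values are invented for the example.

import numpy as np

def novelty_curve(spec):
    # Half-wave-rectified spectral flux: for each frame transition, sum the
    # increases in magnitude across frequency bins; decreases are discarded.
    diff = np.diff(spec, axis=1)
    return np.maximum(diff, 0.0).sum(axis=0)

# Hypothetical 3-bin magnitude spectrogram with an energy burst at frame 2.
spec = np.array([[0.1, 0.1, 0.9, 0.8],
                 [0.0, 0.1, 0.7, 0.6],
                 [0.2, 0.2, 0.5, 0.5]])
print(novelty_curve(spec))  # [0.1 1.7 0. ] -- the onset shows up as a peak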
Export
BibTeX
@phdthesis{Grosche2012, TITLE = {Signal Processing Methods for Beat Tracking, Music Segmentation, and Audio Retrieval}, AUTHOR = {Grosche, Peter Matthias}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-50576}, LOCALID = {Local-ID: 0C70626E41A89315C1257AE1004F5255-Grosche2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {The goal of music information retrieval (MIR) is to develop novel strategies and techniques for organizing, exploring, accessing, and understanding music data in an efficient manner. The conversion of waveform-based audio data into semantically meaningful feature representations by the use of digital signal processing techniques is at the center of MIR and constitutes a difficult field of research because of the complexity and diversity of music signals. In this thesis, we introduce novel signal processing methods that allow for extracting musically meaningful information from audio signals. As main strategy, we exploit musical knowledge about the signals' properties to derive feature representations that show a significant degree of robustness against musical variations but still exhibit a high musical expressiveness. We apply this general strategy to three different areas of MIR: Firstly, we introduce novel techniques for extracting tempo and beat information, where we particularly consider challenging music with changing tempo and soft note onsets. Secondly, we present novel algorithms for the automated segmentation and analysis of folk song field recordings, where one has to cope with significant fluctuations in intonation and tempo as well as recording artifacts. Thirdly, we explore a cross-version approach to content-based music retrieval based on the query-by-example paradigm. In all three areas, we focus on application scenarios where strong musical variations make the extraction of musically meaningful information a challenging task.}, }
Endnote
%0 Thesis %A Grosche, Peter Matthias %Y Theobalt, Christian %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Signal Processing Methods for Beat Tracking, Music Segmentation, and Audio Retrieval : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-0D64-1 %F OTHER: Local-ID: 0C70626E41A89315C1257AE1004F5255-Grosche2012 %U urn:nbn:de:bsz:291-scidok-50576 %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %X The goal of music information retrieval (MIR) is to develop novel strategies and techniques for organizing, exploring, accessing, and understanding music data in an efficient manner. The conversion of waveform-based audio data into semantically meaningful feature representations by the use of digital signal processing techniques is at the center of MIR and constitutes a difficult field of research because of the complexity and diversity of music signals. In this thesis, we introduce novel signal processing methods that allow for extracting musically meaningful information from audio signals. As main strategy, we exploit musical knowledge about the signals' properties to derive feature representations that show a significant degree of robustness against musical variations but still exhibit a high musical expressiveness. We apply this general strategy to three different areas of MIR: Firstly, we introduce novel techniques for extracting tempo and beat information, where we particularly consider challenging music with changing tempo and soft note onsets. Secondly, we present novel algorithms for the automated segmentation and analysis of folk song field recordings, where one has to cope with significant fluctuations in intonation and tempo as well as recording artifacts. Thirdly, we explore a cross-version approach to content-based music retrieval based on the query-by-example paradigm. In all three areas, we focus on application scenarios where strong musical variations make the extraction of musically meaningful information a challenging task. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5057/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[101]
D. Günther, “Topological Analysis of Discrete Scalar Data,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
This thesis presents a novel computational framework that allows for a robust extraction and quantification of the Morse-Smale complex of a scalar field given on a 2- or 3-dimensional manifold. The proposed framework is based on Forman's discrete Morse theory, which guarantees the topological consistency of the computed complex. Using a graph theoretical formulation of this theory, we present an algorithmic library that computes the Morse-Smale complex combinatorially with an optimal complexity of O(n^2) and efficiently creates a multi-level representation of it. We explore the discrete nature of this complex, and relate it to the smooth counterpart. It is often necessary to estimate the feature strength of the individual components of the Morse-Smale complex -- the critical points and separatrices. To do so, we propose a novel output-sensitive strategy to compute the persistence of the critical points. We also extend this well-founded concept to separatrices by introducing a novel measure of feature strength called separatrix persistence. We evaluate the applicability of our methods in a wide variety of application areas ranging from computer graphics to planetary science to computer and electron tomography.
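Persistence, the feature-strength measure for critical points mentioned above, can be illustrated in one dimension: sweeping the function values from below, each local minimum starts a component that later dies at a merging saddle, and its persistence is the height difference between the two. The union-find sketch below computes these (birth, death) pairs for a 1-D sequence; it only illustrates the persistence idea and is not the 2-/3-manifold Morse-Smale pipeline of the thesis.

def persistence_0d(values):
    # 0-dimensional persistence of a 1-D scalar field via union-find.
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in sorted(range(len(values)), key=lambda i: values[i]):
        parent[i], birth[i] = i, values[i]
        for j in (i - 1, i + 1):            # merge with already-seen neighbours
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # Elder rule: the younger component dies at this merge.
                    young, old = (ri, rj) if birth[ri] >= birth[rj] else (rj, ri)
                    pairs.append((birth[young], values[i]))
                    parent[young] = old
    return [(b, d) for b, d in pairs if d > b]  # drop zero-persistence pairs

print(persistence_0d([3, 1, 4, 0, 2]))  # [(1, 4)]: the minimum at 1 dies at 4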
Export
BibTeX
@phdthesis{guenther12phd, TITLE = {Topological Analysis of Discrete Scalar Data}, AUTHOR = {G{\"u}nther, David}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-50563}, LOCALID = {Local-ID: 810A1DC7D88F9AD6C1257AFD003684DC-guenther12phd}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {This thesis presents a novel computational framework that allows for a robust extraction and quantification of the Morse-Smale complex of a scalar field given on a 2- or 3-dimensional manifold. The proposed framework is based on Forman's discrete Morse theory, which guarantees the topological consistency of the computed complex. Using a graph theoretical formulation of this theory, we present an algorithmic library that computes the Morse-Smale complex combinatorially with an optimal complexity of O(n^2) and efficiently creates a multi-level representation of it. We explore the discrete nature of this complex, and relate it to the smooth counterpart. It is often necessary to estimate the feature strength of the individual components of the Morse-Smale complex -- the critical points and separatrices. To do so, we propose a novel output-sensitive strategy to compute the persistence of the critical points. We also extend this well-founded concept to separatrices by introducing a novel measure of feature strength called separatrix persistence. We evaluate the applicability of our methods in a wide variety of application areas ranging from computer graphics to planetary science to computer and electron tomography.}, }
Endnote
%0 Thesis %A Günther, David %Y Weinkauf, Timo %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Topological Analysis of Discrete Scalar Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-F3DC-8 %F OTHER: Local-ID: 810A1DC7D88F9AD6C1257AFD003684DC-guenther12phd %U urn:nbn:de:bsz:291-scidok-50563 %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %X This thesis presents a novel computational framework that allows for a robust extraction and quantification of the Morse-Smale complex of a scalar field given on a 2- or 3-dimensional manifold. The proposed framework is based on Forman's discrete Morse theory, which guarantees the topological consistency of the computed complex. Using a graph theoretical formulation of this theory, we present an algorithmic library that computes the Morse-Smale complex combinatorially with an optimal complexity of O(n^2) and efficiently creates a multi-level representation of it. We explore the discrete nature of this complex, and relate it to the smooth counterpart. It is often necessary to estimate the feature strength of the individual components of the Morse-Smale complex -- the critical points and separatrices. To do so, we propose a novel output-sensitive strategy to compute the persistence of the critical points. We also extend this well-founded concept to separatrices by introducing a novel measure of feature strength called separatrix persistence. We evaluate the applicability of our methods in a wide variety of application areas ranging from computer graphics to planetary science to computer and electron tomography. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5056/
[102]
C. Hritcu, “Union, Intersection, and Refinement Types and Reasoning About Type Disjointness for Security Protocol Analysis,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
In this thesis we present two new type systems for verifying the security of cryptographic protocol models expressed in a spi-calculus and, respectively, of protocol implementations expressed in a concurrent lambda calculus. The two type systems combine prior work on refinement types with union and intersection types and with the novel ability to reason statically about the disjointness of types. The increased expressivity enables the analysis of important protocol classes that were previously out of scope for the type-based analyses of cryptographic protocols. In particular, our type systems can statically analyze protocols that are based on zero-knowledge proofs, even in scenarios when certain protocol participants are compromised. The analysis is scalable and provides security proofs for an unbounded number of protocol executions. The two type systems come with mechanized proofs of correctness and efficient implementations.
Export
BibTeX
@phdthesis{Hritcu2012, TITLE = {Union, Intersection, and Refinement Types and Reasoning About Type Disjointness for Security Protocol Analysis}, AUTHOR = {Hritcu, Catalin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {In this thesis we present two new type systems for verifying the security of cryptographic protocol models expressed in a spi-calculus and, respectively, of protocol implementations expressed in a concurrent lambda calculus. The two type systems combine prior work on refinement types with union and intersection types and with the novel ability to reason statically about the disjointness of types. The increased expressivity enables the analysis of important protocol classes that were previously out of scope for the type-based analyses of cryptographic protocols. In particular, our type systems can statically analyze protocols that are based on zero-knowledge proofs, even in scenarios when certain protocol participants are compromised. The analysis is scalable and provides security proofs for an unbounded number of protocol executions. The two type systems come with mechanized proofs of correctness and efficient implementations.}, }
Endnote
%0 Thesis %A Hritcu, Catalin %Y Backes, Michael %A referee: Maffei, Matteo %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Cluster of Excellence Multimodal Computing and Interaction %T Union, Intersection, and Refinement Types and Reasoning About Type Disjointness for Security Protocol Analysis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-9EFA-D %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %X In this thesis we present two new type systems for verifying the security of cryptographic protocol models expressed in a spi-calculus and, respectively, of protocol implementations expressed in a concurrent lambda calculus. The two type systems combine prior work on refinement types with union and intersection types and with the novel ability to reason statically about the disjointness of types. The increased expressivity enables the analysis of important protocol classes that were previously out of scope for the type-based analyses of cryptographic protocols. In particular, our type systems can statically analyze protocols that are based on zero-knowledge proofs, even in scenarios when certain protocol participants are compromised. The analysis is scalable and provides security proofs for an unbounded number of protocol executions. The two type systems come with mechanized proofs of correctness and efficient implementations. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4800/
[103]
V. Konz, “Automated Methods for Audio-based Music Analysis with Applications to Musicology,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{PhDThesisKonzVerena, TITLE = {Automated Methods for Audio-based Music Analysis with Applications to Musicology}, AUTHOR = {Konz, Verena}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-49984}, LOCALID = {Local-ID: 017FB6407E6271CBC1257AEE00350D94-PhDThesisKonzVerena}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Konz, Verena %Y Müller, Meinard %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Automated Methods for Audio-based Music Analysis with Applications to Musicology : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-F698-C %U urn:nbn:de:bsz:291-scidok-49984 %F OTHER: Local-ID: 017FB6407E6271CBC1257AEE00350D94-PhDThesisKonzVerena %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4998/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[104]
Y. Mileva, “Mining the Evolution of Software Component Usage,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
The topic of this thesis is the analysis of the evolution of software components. In order to track the evolution of software components, one needs to collect the evolution information of each component. This information is stored in the version control system (VCS) of the project - the repository of the history of events happening throughout the project's lifetime. By using software archive mining techniques one can extract and leverage this information. The main contribution of this thesis is the introduction of evolution usage trends and evolution change patterns. The raw information about the occurrences of each component is stored in the VCS of the project. By organizing it in evolution trends and patterns, we are able to draw conclusions and issue recommendations concerning each individual component and the project as a whole. Evolution Trends: An evolution trend is a way to track the evolution of a software component throughout the span of the project. The trend shows the increases and decreases in the usage of a specific component, which can be indicative of the quality of this component. AKTARI is a tool, presented in this thesis, that is based on such evolution trends and can be used by the software developers to observe and draw conclusions about the behavior of their project. Evolution Patterns: An evolution pattern is a pattern of a frequently occurring code change throughout the span of the project. Those frequently occurring changes are project-specific and are characteristic of the way the project evolves. Each such evolution pattern contains in itself the specific way "things are done" in the project and as such can serve for defect detection and defect prevention. The technique of mining evolution patterns is implemented as a basis for the LAMARCK tool, presented in this thesis.
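A minimal sketch of how such a usage trend could be mined from a version history: count the occurrences of a component in each chronological revision and read the resulting sequence as the trend. The revisions and component names below are invented for the example, and real VCS mining (as in AKTARI) involves considerably more machinery.

import re

def usage_trend(revisions, component):
    # revisions: chronological list of source snapshots (plain strings);
    # the per-revision occurrence counts form the component's usage trend.
    pattern = re.compile(re.escape(component))
    return [len(pattern.findall(src)) for src in revisions]

# Hypothetical history in which Vector is gradually replaced by ArrayList.
history = ["new Vector(); new Vector();",
           "new Vector(); new ArrayList();",
           "new ArrayList(); new ArrayList();"]
print(usage_trend(history, "Vector"))     # [2, 1, 0] -- a falling trend
print(usage_trend(history, "ArrayList"))  # [0, 1, 2] -- a rising trend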
Export
BibTeX
@phdthesis{Mileva2012, TITLE = {Mining the Evolution of Software Component Usage}, AUTHOR = {Mileva, Yana}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {The topic of this thesis is the analysis of the evolution of software components. In order to track the evolution of software components, one needs to collect the evolution information of each component. This information is stored in the version control system (VCS) of the project: the repository of the history of events happening throughout the project's lifetime. By using software archive mining techniques one can extract and leverage this information. The main contribution of this thesis is the introduction of evolution usage trends and evolution change patterns. The raw information about the occurrences of each component is stored in the VCS of the project. By organizing it in evolution trends and patterns, we are able to draw conclusions and issue recommendations concerning each individual component and the project as a whole. Evolution Trends: An evolution trend is a way to track the evolution of a software component throughout the span of the project. The trend shows the increases and decreases in the usage of a specific component, which can be indicative of the quality of this component. AKTARI is a tool, presented in this thesis, that is based on such evolution trends and can be used by the software developers to observe and draw conclusions about the behavior of their project. Evolution Patterns: An evolution pattern is a pattern of a frequently occurring code change throughout the span of the project. Those frequently occurring changes are project-specific and are explanatory of the way the project evolves. Each such evolution pattern contains in itself the specific way ``things are done'' in the project and as such can serve for defect detection and defect prevention. The technique of mining evolution patterns is implemented as a basis for the LAMARCK tool, presented in this thesis.}, }
Endnote
%0 Thesis %A Mileva, Yana %Y Zeller, Andreas %A referee: Weikum, Gerhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T Mining the Evolution of Software Component Usage : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-9F5C-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V phd %9 phd %X The topic of this thesis is the analysis of the evolution of software components. In order to track the evolution of software components, one needs to collect the evolution information of each component. This information is stored in the version control system (VCS) of the project: the repository of the history of events happening throughout the project's lifetime. By using software archive mining techniques one can extract and leverage this information. The main contribution of this thesis is the introduction of evolution usage trends and evolution change patterns. The raw information about the occurrences of each component is stored in the VCS of the project. By organizing it in evolution trends and patterns, we are able to draw conclusions and issue recommendations concerning each individual component and the project as a whole. Evolution Trends: An evolution trend is a way to track the evolution of a software component throughout the span of the project. The trend shows the increases and decreases in the usage of a specific component, which can be indicative of the quality of this component. AKTARI is a tool, presented in this thesis, that is based on such evolution trends and can be used by the software developers to observe and draw conclusions about the behavior of their project. Evolution Patterns: An evolution pattern is a pattern of a frequently occurring code change throughout the span of the project. Those frequently occurring changes are project-specific and are explanatory of the way the project evolves. Each such evolution pattern contains in itself the specific way "things are done" in the project and as such can serve for defect detection and defect prevention. The technique of mining evolution patterns is implemented as a basis for the LAMARCK tool, presented in this thesis. %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4899/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[105]
N. Nakashole, “Automatic Extraction of Facts, Relations, and Entities for Web-scale Knowledge Base Population,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Equipping machines with knowledge, through the construction of machine-readable knowledge bases, presents a key asset for semantic search, machine translation, question answering, and other formidable challenges in artificial intelligence. However, human knowledge predominantly resides in books and other natural language text forms. This means that knowledge bases must be extracted and synthesized from natural language text. When the source of text is the Web, extraction methods must cope with ambiguity, noise, scale, and updates. The goal of this dissertation is to develop knowledge base population methods that address the aforementioned characteristics of Web text. The dissertation makes three contributions. The first contribution is a method for mining high-quality facts at scale, through distributed constraint reasoning and a pattern representation model that is robust against noisy patterns. The second contribution is a method for mining a large comprehensive collection of relation types beyond those commonly found in existing knowledge bases. The third contribution is a method for extracting facts from dynamic Web sources such as news articles and social media where one of the key challenges is the constant emergence of new entities. All methods have been evaluated through experiments involving Web-scale text collections.
Export
BibTeX
@phdthesis{phdthesis-nakashole, TITLE = {Automatic Extraction of Facts, Relations, and Entities for Web-scale Knowledge Base Population}, AUTHOR = {Nakashole, Ndapandula}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-312844A683E3D3CFC1257AED006307CA-phdthesis-nakashole}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Equipping machines with knowledge, through the construction of machine-readable knowledge bases, presents a key asset for semantic search, machine translation, question answering, and other formidable challenges in artificial intelligence. However, human knowledge predominantly resides in books and other natural language text forms. This means that knowledge bases must be extracted and synthesized from natural language text. When the source of text is the Web, extraction methods must cope with ambiguity, noise, scale, and updates. The goal of this dissertation is to develop knowledge base population methods that address the aforementioned characteristics of Web text. The dissertation makes three contributions. The first contribution is a method for mining high-quality facts at scale, through distributed constraint reasoning and a pattern representation model that is robust against noisy patterns. The second contribution is a method for mining a large comprehensive collection of relation types beyond those commonly found in existing knowledge bases. The third contribution is a method for extracting facts from dynamic Web sources such as news articles and social media where one of the key challenges is the constant emergence of new entities. All methods have been evaluated through experiments involving Web-scale text collections.}, }
Endnote
%0 Thesis %A Nakashole, Ndapandula %Y Weikum, Gerhard %A referee: Suchanek, Fabian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Automatic Extraction of Facts, Relations, and Entities for Web-scale Knowledge Base Population : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-627F-A %F EDOC: 647490 %F OTHER: Local-ID: C1256DBF005F876D-312844A683E3D3CFC1257AED006307CA-phdthesis-nakashole %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V phd %9 phd %X Equipping machines with knowledge, through the construction of machine-readable knowledge bases, presents a key asset for semantic search, machine translation, question answering, and other formidable challenges in artificial intelligence. However, human knowledge predominantly resides in books and other natural language text forms. This means that knowledge bases must be extracted and synthesized from natural language text. When the source of text is the Web, extraction methods must cope with ambiguity, noise, scale, and updates. The goal of this dissertation is to develop knowledge base population methods that address the aforementioned characteristics of Web text. The dissertation makes three contributions. The first contribution is a method for mining high-quality facts at scale, through distributed constraint reasoning and a pattern representation model that is robust against noisy patterns. The second contribution is a method for mining a large comprehensive collection of relation types beyond those commonly found in existing knowledge bases. The third contribution is a method for extracting facts from dynamic Web sources such as news articles and social media where one of the key challenges is the constant emergence of new entities. All methods have been evaluated through experiments involving Web-scale text collections. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2013/5054/
[106]
R. Osbild, “General Analysis Tool Box for Controlled Perturbation Algorithms and Complexity and Computation of Θ-Guarded Regions,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{OsbildPhD2013, TITLE = {General Analysis Tool Box for Controlled Perturbation Algorithms and Complexity and Computation of $\Theta$-Guarded Regions}, AUTHOR = {Osbild, Ralf}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55201}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Osbild, Ralf %Y Mehlhorn, Kurt %A referee: Seidel, Raimund %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T General Analysis Tool Box for Controlled Perturbation Algorithms and Complexity and Computation of &#920;-Guarded Regions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0025-068B-2 %U urn:nbn:de:bsz:291-scidok-55201 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %P 136 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2013/5520/
[107]
H.-J. Peter, “A Uniform Approach to the Complexity and Analysis of Succinct Systems,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
This thesis provides a unifying view on the succinctness of systems: the capability of a modeling formalism to describe the behavior of a system of exponential size using a polynomial syntax. The key theoretical contribution is the introduction of sequential circuit machines as a new universal computation model that focuses on succinctness as the central aspect. The thesis demonstrates that many well-known modeling formalisms such as communicating state machines, linear-time temporal logic, or timed automata exhibit an immediate connection to this machine model. Once a (syntactic) connection is established, many complexity bounds for structurally restricted sequential circuit machines can be transferred to a certain formalism in a uniform manner. As a consequence, besides a far-reaching unification of independent lines of research, we are also able to provide matching complexity bounds for various analysis problems, whose complexities were not known so far. For example, we establish matching lower and upper bounds for the small witness problem and several variants of the bounded synthesis problem for timed automata, a particularly important succinct modeling formalism. Also for timed automata, our complexity-theoretic analysis leads to the identification of tractable fragments of the timed synthesis problem under partial observability. Specifically, we identify timed controller synthesis based on discrete or template-based controllers to be equivalent to model checking. Based on this discovery, we develop a new model checking-based algorithm to efficiently find feasible template instantiations. From a more practical perspective, this thesis also studies the preservation of succinctness in analysis algorithms using symbolic data structures. While efficient techniques exist for specific forms of succinctness considered in isolation, we present a general approach based on abstraction refinement to combine off-the-shelf symbolic data structures. In particular, for handling the combination of concurrency and quantitative timing behavior in networks of timed automata, we report on the tool Synthia, which combines binary decision diagrams with difference bound matrices. In a comparison with the timed model checker Uppaal and the timed game solver Uppaal-Tiga running on standard benchmarks from the timed model checking and synthesis domain, respectively, the experimental results clearly demonstrate the effectiveness of our new approach.
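The notion of succinctness at issue here can be made concrete with a toy example (illustrative only, unrelated to the thesis's sequential circuit machines): a system described as n independent two-state components has a description linear in n but a global state space of size 2^n.

```python
from itertools import product

# n two-state components: the description grows linearly in n,
# while the induced global state space has 2**n product states.
n = 10
components = [("off", "on")] * n
global_states = list(product(*components))
print(len(global_states))  # 1024 == 2**10
```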
Export
BibTeX
@phdthesis{Peter2012, TITLE = {A Uniform Approach to the Complexity and Analysis of Succinct Systems}, AUTHOR = {Peter, Hans-J{\"o}rg}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {This thesis provides a unifying view on the succinctness of systems: the capability of a modeling formalism to describe the behavior of a system of exponential size using a polynomial syntax. The key theoretical contribution is the introduction of sequential circuit machines as a new universal computation model that focuses on succinctness as the central aspect. The thesis demonstrates that many well-known modeling formalisms such as communicating state machines, linear-time temporal logic, or timed automata exhibit an immediate connection to this machine model. Once a (syntactic) connection is established, many complexity bounds for structurally restricted sequential circuit machines can be transferred to a certain formalism in a uniform manner. As a consequence, besides a far-reaching unification of independent lines of research, we are also able to provide matching complexity bounds for various analysis problems, whose complexities were not known so far. For example, we establish matching lower and upper bounds for the small witness problem and several variants of the bounded synthesis problem for timed automata, a particularly important succinct modeling formalism. Also for timed automata, our complexity-theoretic analysis leads to the identification of tractable fragments of the timed synthesis problem under partial observability. Specifically, we identify timed controller synthesis based on discrete or template-based controllers to be equivalent to model checking. Based on this discovery, we develop a new model checking-based algorithm to efficiently find feasible template instantiations. From a more practical perspective, this thesis also studies the preservation of succinctness in analysis algorithms using symbolic data structures. While efficient techniques exist for specific forms of succinctness considered in isolation, we present a general approach based on abstraction refinement to combine off-the-shelf symbolic data structures. In particular, for handling the combination of concurrency and quantitative timing behavior in networks of timed automata, we report on the tool Synthia, which combines binary decision diagrams with difference bound matrices. In a comparison with the timed model checker Uppaal and the timed game solver Uppaal-Tiga running on standard benchmarks from the timed model checking and synthesis domain, respectively, the experimental results clearly demonstrate the effectiveness of our new approach.}, }
Endnote
%0 Thesis %A Peter, Hans-J&#246;rg %Y Finkbeiner, Bernd %A referee: Raskin, Jean-Francois %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Uniform Approach to the Complexity and Analysis of Succinct Systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-9F65-6 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V phd %9 phd %X This thesis provides a unifying view on the succinctness of systems: the capability of a modeling formalism to describe the behavior of a system of exponential size using a polynomial syntax. The key theoretical contribution is the introduction of sequential circuit machines as a new universal computation model that focuses on succinctness as the central aspect. The thesis demonstrates that many well-known modeling formalisms such as communicating state machines, linear-time temporal logic, or timed automata exhibit an immediate connection to this machine model. Once a (syntactic) connection is established, many complexity bounds for structurally restricted sequential circuit machines can be transferred to a certain formalism in a uniform manner. As a consequence, besides a far-reaching unification of independent lines of research, we are also able to provide matching complexity bounds for various analysis problems, whose complexities were not known so far. For example, we establish matching lower and upper bounds for the small witness problem and several variants of the bounded synthesis problem for timed automata, a particularly important succinct modeling formalism. Also for timed automata, our complexity-theoretic analysis leads to the identification of tractable fragments of the timed synthesis problem under partial observability. Specifically, we identify timed controller synthesis based on discrete or template-based controllers to be equivalent to model checking. Based on this discovery, we develop a new model checking-based algorithm to efficiently find feasible template instantiations. From a more practical perspective, this thesis also studies the preservation of succinctness in analysis algorithms using symbolic data structures. While efficient techniques exist for specific forms of succinctness considered in isolation, we present a general approach based on abstraction refinement to combine off-the-shelf symbolic data structures. In particular, for handling the combination of concurrency and quantitative timing behavior in networks of timed automata, we report on the tool Synthia, which combines binary decision diagrams with difference bound matrices. In a comparison with the timed model checker Uppaal and the timed game solver Uppaal-Tiga running on standard benchmarks from the timed model checking and synthesis domain, respectively, the experimental results clearly demonstrate the effectiveness of our new approach. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2013/5482/
[108]
S. Popov, “Algorithms and Data Structures for Interactive Ray Tracing on Commodity Hardware,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Rendering methods based on ray tracing provide high image realism, but have historically been regarded as offline only. This has changed in the past decade, due to significant advances in the construction and traversal performance of acceleration structures and the efficient use of data-parallel processing. Today, all major graphics companies offer real-time ray tracing solutions. The following work has contributed to this development with some key insights. We first address the limited support of dynamic scenes in previous work, by proposing two new parallel-friendly construction algorithms for KD-trees and BVHs. By approximating the cost function, we accelerate construction by up to an order of magnitude (especially for BVHs), at the expense of only tiny degradation to traversal performance. For the static portions of the scene, we also address the topic of creating the "perfect" acceleration structure. We develop a polynomial time non-greedy BVH construction algorithm. We then modify it to produce a new type of acceleration structure that inherits both the high performance of KD-trees and the small size of BVHs. Finally, we focus on bringing real-time ray tracing to commodity desktop computers. We develop several new KD-tree and BVH traversal algorithms specifically tailored for the GPU. With them, we show for the first time that GPU ray tracing is indeed feasible, and it can outperform CPU ray tracing by almost an order of magnitude, even on large CAD models.
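For context, the construction cost function referred to here is, in its standard surface area heuristic (SAH) formulation (the exact cost model and constants in the thesis may differ), the greedy score of splitting a node P into children L and R:

```latex
C(L, R) = C_{\mathrm{trav}}
        + \frac{SA(L)}{SA(P)}\, N_L\, C_{\mathrm{isect}}
        + \frac{SA(R)}{SA(P)}\, N_R\, C_{\mathrm{isect}}
```

where SA is surface area, N_L and N_R are the primitive counts of the children, and C_trav and C_isect are the estimated traversal and intersection costs. Evaluating this exactly over all split candidates is what makes full builds expensive; approximating it is what buys the order-of-magnitude construction speedup.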
Export
BibTeX
@phdthesis{Popov2012, TITLE = {Algorithms and Data Structures for Interactive Ray Tracing on Commodity Hardware}, AUTHOR = {Popov, Stefan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Rendering methods based on ray tracing provide high image realism, but have historically been regarded as offline only. This has changed in the past decade, due to significant advances in the construction and traversal performance of acceleration structures and the efficient use of data-parallel processing. Today, all major graphics companies offer real-time ray tracing solutions. The following work has contributed to this development with some key insights. We first address the limited support of dynamic scenes in previous work, by proposing two new parallel-friendly construction algorithms for KD-trees and BVHs. By approximating the cost function, we accelerate construction by up to an order of magnitude (especially for BVHs), at the expense of only tiny degradation to traversal performance. For the static portions of the scene, we also address the topic of creating the "perfect" acceleration structure. We develop a polynomial time non-greedy BVH construction algorithm. We then modify it to produce a new type of acceleration structure that inherits both the high performance of KD-trees and the small size of BVHs. Finally, we focus on bringing real-time ray tracing to commodity desktop computers. We develop several new KD-tree and BVH traversal algorithms specifically tailored for the GPU. With them, we show for the first time that GPU ray tracing is indeed feasible, and it can outperform CPU ray tracing by almost an order of magnitude, even on large CAD models.}, }
Endnote
%0 Thesis %A Popov, Stefan %Y Slusallek, Philipp %A referee: Myszkowski, Karol %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Algorithms and Data Structures for Interactive Ray Tracing on Commodity Hardware : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-9F67-2 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V phd %9 phd %X Rendering methods based on ray tracing provide high image realism, but have historically been regarded as offline only. This has changed in the past decade, due to significant advances in the construction and traversal performance of acceleration structures and the efficient use of data-parallel processing. Today, all major graphics companies offer real-time ray tracing solutions. The following work has contributed to this development with some key insights. We first address the limited support of dynamic scenes in previous work, by proposing two new parallel-friendly construction algorithms for KD-trees and BVHs. By approximating the cost function, we accelerate construction by up to an order of magnitude (especially for BVHs), at the expense of only tiny degradation to traversal performance. For the static portions of the scene, we also address the topic of creating the "perfect" acceleration structure. We develop a polynomial time non-greedy BVH construction algorithm. We then modify it to produce a new type of acceleration structure that inherits both the high performance of KD-trees and the small size of BVHs. Finally, we focus on bringing real-time ray tracing to commodity desktop computers. We develop several new KD-tree and BVH traversal algorithms specifically tailored for the GPU. With them, we show for the first time that GPU ray tracing is indeed feasible, and it can outperform CPU ray tracing by almost an order of magnitude, even on large CAD models. %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4963/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[109]
F. Ramirez, “Novel Approaches to the Integration and Analysis of Systems Biology,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{Ramirez2012th, TITLE = {Novel Approaches to the Integration and Analysis of Systems Biology}, AUTHOR = {Ramirez, Fidel}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-48554}, LOCALID = {Local-ID: EAA85644FD5EB9F8C1257B2800408A00-Ramirez2012th}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Ramirez, Fidel %Y Lengauer, Thomas %A referee: Albrecht, Mario %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Novel Approaches to the Integration and Analysis of Systems Biology : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-B9E2-D %U urn:nbn:de:bsz:291-scidok-48554 %F OTHER: Local-ID: EAA85644FD5EB9F8C1257B2800408A00-Ramirez2012th %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %P XVIII, 176 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.phphttp://scidok.sulb.uni-saarland.de/volltexte/2012/4855/pdf/thesis.pdf
[110]
L. Tolosi, “Finding Regions of Aberrant DNA Copy Number Associated With Tumor Phenotype,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{Tolosi2012th, TITLE = {Finding Regions of Aberrant {DNA} Copy Number Associated With Tumor Phenotype}, AUTHOR = {Tolosi, Laura}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-49665}, LOCALID = {Local-ID: 8E6F4087F836C766C1257B280040B1E2-Tolosi2012th}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Tolosi, Laura %Y Lengauer, Thomas %A referee: Lenhof, Hans-Peter %A referee: Rahnenf&#252;hrer, J&#246;rg %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Finding Regions of Aberrant DNA Copy Number Associated With Tumor Phenotype : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-B9DE-A %F OTHER: Local-ID: 8E6F4087F836C766C1257B280040B1E2-Tolosi2012th %U urn:nbn:de:bsz:291-scidok-49665 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %P XII, 175 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4966/pdf/Tolosi_PhD.pdfhttp://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php
[111]
P. Wischnewski, “Efficient Reasoning Procedures for Complex First-order Theories,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{Wischnewski12, TITLE = {Efficient Reasoning Procedures for Complex First-order Theories}, AUTHOR = {Wischnewski, Patrick}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-49961}, LOCALID = {Local-ID: 09A72B09A52B038AC1257AF00040853F-Wischnewski12}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Wischnewski, Patrick %Y Weidenbach, Christoph %A referee: Weikum, Gerhard %A referee: Schaub, Torsten %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Efficient Reasoning Procedures for Complex First-order Theories : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-B792-4 %U urn:nbn:de:bsz:291-scidok-49961 %F OTHER: Local-ID: 09A72B09A52B038AC1257AF00040853F-Wischnewski12 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2012/4996/
[112]
H. L. Zimmer, “Correspondence Problems in Computer Vision: Novel Models, Numerics, and Applications,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Correspondence problems like optic flow belong to the fundamental problems in computer vision. Here, one aims at finding correspondences between the pixels in two (or more) images. The correspondences are described by a displacement vector field that is often found by minimising an energy (cost) function. In this thesis, we present several contributions to the energy-based solution of correspondence problems: (i) We start by developing a robust data term with a high degree of invariance under illumination changes. Then, we design an anisotropic smoothness term that works complementary to the data term, thereby avoiding undesirable interference. Additionally, we propose a simple method for determining the optimal balance between the two terms. (ii) When discretising image derivatives that occur in our continuous models, we show that adopting one-sided upwind discretisations from the field of hyperbolic differential equations can be beneficial. To ensure a fast solution of the nonlinear system of equations that arises when minimising the energy, we use the recent fast explicit diffusion (FED) solver in an explicit gradient descent scheme. (iii) Finally, we present a novel application of modern optic flow methods where we align exposure series used in high dynamic range (HDR) imaging. Furthermore, we show how the alignment information can be used in a joint super-resolution and HDR method.
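As orientation for the reader, energies of the kind minimised here typically take the following variational form (a generic sketch, not the exact model of the thesis), with w = (u, v) the displacement field between images I1 and I2 over the image domain Ω:

```latex
E(u, v) = \int_{\Omega} \Psi\!\left( |I_2(\mathbf{x} + \mathbf{w}) - I_1(\mathbf{x})|^2 \right)
        + \alpha \, \Psi\!\left( |\nabla u|^2 + |\nabla v|^2 \right) \mathrm{d}\mathbf{x}
```

The first term penalises deviations from data constancy, the second penalises non-smooth flow fields, Ψ is a robust penaliser, and α is the balance weight; the thesis's contributions refine the data term (illumination invariance), the smoothness term (anisotropy), and the choice of α.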
Export
BibTeX
@phdthesis{Zimmer2012, TITLE = {Correspondence Problems in Computer Vision: Novel Models, Numerics, and Applications}, AUTHOR = {Zimmer, Henning Lars}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Correspondence problems like optic flow belong to the fundamental problems in computer vision. Here, one aims at finding correspondences between the pixels in two (or more) images. The correspondences are described by a displacement vector field that is often found by minimising an energy (cost) function. In this thesis, we present several contributions to the energy-based solution of correspondence problems: (i) We start by developing a robust data term with a high degree of invariance under illumination changes. Then, we design an anisotropic smoothness term that works complementary to the data term, thereby avoiding undesirable interference. Additionally, we propose a simple method for determining the optimal balance between the two terms. (ii) When discretising image derivatives that occur in our continuous models, we show that adopting one-sided upwind discretisations from the field of hyperbolic differential equations can be beneficial. To ensure a fast solution of the nonlinear system of equations that arises when minimising the energy, we use the recent fast explicit diffusion (FED) solver in an explicit gradient descent scheme. (iii) Finally, we present a novel application of modern optic flow methods where we align exposure series used in high dynamic range (HDR) imaging. Furthermore, we show how the alignment information can be used in a joint super-resolution and HDR method.}, }
Endnote
%0 Thesis %A Zimmer, Henning Lars %Y Weickert, Joachim %A referee: Cremers, Daniel %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Correspondence Problems in Computer Vision: Novel Models, Numerics, and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A19D-8 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V phd %9 phd %X Correspondence problems like optic flow belong to the fundamental problems in computer vision. Here, one aims at finding correspondences between the pixels in two (or more) images. The correspondences are described by a displacement vector field that is often found by minimising an energy (cost) function. In this thesis, we present several contributions to the energy-based solution of correspondence problems: (i) We start by developing a robust data term with a high degree of invariance under illumination changes. Then, we design an anisotropic smoothness term that works complementary to the data term, thereby avoiding undesirable interference. Additionally, we propose a simple method for determining the optimal balance between the two terms. (ii) When discretising image derivatives that occur in our continuous models, we show that adopting one-sided upwind discretisations from the field of hyperbolic differential equations can be beneficial. To ensure a fast solution of the nonlinear system of equations that arises when minimising the energy, we use the recent fast explicit diffusion (FED) solver in an explicit gradient descent scheme. (iii) Finally, we present a novel application of modern optic flow methods where we align exposure series used in high dynamic range (HDR) imaging. Furthermore, we show how the alignment information can be used in a joint super-resolution and HDR method. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2012/4679/
2011
[113]
J. Bogojeska, “Statistical Learning Methods for Bias-aware HIV Therapy Screening,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
The human immunodeficiency virus (HIV) is the causative agent of the acquired immunodeficiency syndrome (AIDS), which has claimed nearly 30 million lives and is arguably among the worst plagues in human history. With no cure or vaccine in sight, HIV patients are treated by administration of combinations of antiretroviral drugs. The very large number of such combinations makes the manual search for an effective therapy practically impossible, especially in advanced stages of the disease. Therapy selection can be supported by statistical methods that predict the outcomes of candidate therapies. However, these methods are based on clinical data sets that are biased in many ways. The main sources of bias are the evolving trends of treating HIV patients, the sparse, uneven therapy representation, the different treatment backgrounds of the clinical samples and the differing abundances of the various therapy-experience levels. In this thesis we focus on the problem of devising bias-aware statistical learning methods for HIV therapy screening, that is, predicting the effectiveness of HIV combination therapies. For this purpose we develop five novel approaches that address the aforementioned biases in the clinical data sets when predicting the outcomes of HIV therapies. Three of the approaches aim for good prediction performance for every drug combination independent of its abundance in the HIV clinical data set. To achieve this, they balance the sparse and uneven therapy representation by using different routes of sharing common knowledge among related therapies. The remaining two approaches additionally account for the bias originating from the differing treatment histories of the samples making up the HIV clinical data sets. For this purpose, both methods predict the response of an HIV combination therapy by taking not only the most recent (target) therapy but also available information from preceding therapies into account. In this way they provide good predictions for advanced patients in mid to late stages of HIV treatment, and for rare drug combinations. All our methods use the time-oriented evaluation scenario, where models are trained on data from the less recent past while their performance is evaluated on data from the more recent past. This is the approach we adopt to account for the evolving treatment trends in HIV clinical practice and thus offer a realistic model assessment.
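The time-oriented evaluation scenario is easy to state in code (a minimal sketch; the record layout and cutoff date are invented for the example):

```python
from datetime import date

def time_oriented_split(records, cutoff):
    """Train on therapies started before the cutoff, test on later ones.

    A model deployed at `cutoff` can only have seen earlier data, so
    evaluating on strictly later data accounts for evolving treatment
    trends and yields a realistic assessment."""
    train = [r for r in records if r[0] < cutoff]
    test = [r for r in records if r[0] >= cutoff]
    return train, test

# Hypothetical records: (therapy start date, feature vector, success label).
records = [
    (date(2004, 3, 1), [0.2, 1.0], 1),
    (date(2006, 7, 15), [0.9, 0.0], 0),
    (date(2008, 1, 10), [0.4, 1.0], 1),
]
train, test = time_oriented_split(records, cutoff=date(2007, 1, 1))
print(len(train), len(test))  # 2 1
```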
Export
BibTeX
@phdthesis{BogojeskaPhD2011, TITLE = {Statistical Learning Methods for Bias-aware {HIV} Therapy Screening}, AUTHOR = {Bogojeska, Jasmina}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2012/4547/}, LOCALID = {Local-ID: C125673F004B2D7B-B354E6F7403CA747C1257975005027FD-BogojeskaPhD2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {The human immunodeficiency virus (HIV) is the causative agent of the acquired immunodeficiency syndrome (AIDS) which claimed nearly $30$ million lives and is arguably among the worst plagues in human history. With no cure or vaccine in sight, HIV patients are treated by administration of combinations of antiretroviral drugs. The very large number of such combinations makes the manual search for an effective therapy practically impossible, especially in advanced stages of the disease. Therapy selection can be supported by statistical methods that predict the outcomes of candidate therapies. However, these methods are based on clinical data sets that are biased in many ways. The main sources of bias are the evolving trends of treating HIV patients, the sparse, uneven therapy representation, the different treatment backgrounds of the clinical samples and the differing abundances of the various therapy-experience levels. In this thesis we focus on the problem of devising bias-aware statistical learning methods for HIV therapy screening -- predicting the effectiveness of HIV combination therapies. For this purpose we develop five novel approaches that when predicting outcomes of HIV therapies address the aforementioned biases in the clinical data sets. Three of the approaches aim for good prediction performance for every drug combination independent of its abundance in the HIV clinical data set. To achieve this, they balance the sparse and uneven therapy representation by using different routes of sharing common knowledge among related therapies. The remaining two approaches additionally account for the bias originating from the differing treatment histories of the samples making up the HIV clinical data sets. For this purpose, both methods predict the response of an HIV combination therapy by taking not only the most recent (target) therapy but also available information from preceding therapies into account. In this way they provide good predictions for advanced patients in mid to late stages of HIV treatment, and for rare drug combinations. All our methods use the time-oriented evaluation scenario, where models are trained on data from the less recent past while their performance is evaluated on data from the more recent past. This is the approach we adopt to account for the evolving treatment trends in the HIV clinical practice and thus offer a realistic model assessment.}, }
Endnote
%0 Thesis %A Bogojeska, Jasmina %Y Lengauer, Thomas %A referee: Rahnenf&#252;hrer, J&#246;rg %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Statistical Learning Methods for Bias-aware HIV Therapy Screening : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-119A-8 %F EDOC: 618815 %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4547/ %F OTHER: Local-ID: C125673F004B2D7B-B354E6F7403CA747C1257975005027FD-BogojeskaPhD2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P 135 p. %V phd %9 phd %X The human immunodeficiency virus (HIV) is the causative agent of the acquired immunodeficiency syndrome (AIDS) which claimed nearly $30$ million lives and is arguably among the worst plagues in human history. With no cure or vaccine in sight, HIV patients are treated by administration of combinations of antiretroviral drugs. The very large number of such combinations makes the manual search for an effective therapy practically impossible, especially in advanced stages of the disease. Therapy selection can be supported by statistical methods that predict the outcomes of candidate therapies. However, these methods are based on clinical data sets that are biased in many ways. The main sources of bias are the evolving trends of treating HIV patients, the sparse, uneven therapy representation, the different treatment backgrounds of the clinical samples and the differing abundances of the various therapy-experience levels. In this thesis we focus on the problem of devising bias-aware statistical learning methods for HIV therapy screening -- predicting the effectiveness of HIV combination therapies. For this purpose we develop five novel approaches that when predicting outcomes of HIV therapies address the aforementioned biases in the clinical data sets. Three of the approaches aim for good prediction performance for every drug combination independent of its abundance in the HIV clinical data set. To achieve this, they balance the sparse and uneven therapy representation by using different routes of sharing common knowledge among related therapies. The remaining two approaches additionally account for the bias originating from the differing treatment histories of the samples making up the HIV clinical data sets. For this purpose, both methods predict the response of an HIV combination therapy by taking not only the most recent (target) therapy but also available information from preceding therapies into account. In this way they provide good predictions for advanced patients in mid to late stages of HIV treatment, and for rare drug combinations. All our methods use the time-oriented evaluation scenario, where models are trained on data from the less recent past while their performance is evaluated on data from the more recent past. This is the approach we adopt to account for the evolving treatment trends in the HIV clinical practice and thus offer a realistic model assessment. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2012/4547/
[114]
M. Bokeloh, “Symmetry in 3D Shapes -- Analysis and Applications to Model Synthesis,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@phdthesis{PhDThesis2011Bokeloh, TITLE = {Symmetry in {3D} Shapes -- Analysis and Applications to Model Synthesis}, AUTHOR = {Bokeloh, Martin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-45137}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Bokeloh, Martin %Y Seidel, Hans-Peter %A referee: Wand, Michael %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Symmetry in 3D Shapes -- Analysis and Applications to Model Synthesis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-119D-2 %F EDOC: 618879 %U urn:nbn:de:bsz:291-scidok-45137 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P 134 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2011/4513/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[115]
K. Bozek, “Analysis of HIV-host interaction on different scales,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@phdthesis{BozekDiss2011, TITLE = {Analysis of {HIV}-host interaction on different scales}, AUTHOR = {Bozek, Katarzyna}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2012/4529/}, LOCALID = {Local-ID: C125673F004B2D7B-D454DEBABE060435C125798100144241-BozekDiss2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Bozek, Katarzyna %Y Lengauer, Thomas %A referee: L&#228;ssig, Michael %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Analysis of HIV-host interaction on different scales : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-1197-E %F EDOC: 618820 %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4529/ %F OTHER: Local-ID: C125673F004B2D7B-D454DEBABE060435C125798100144241-BozekDiss2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P VII, 171 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4529/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[116]
L. Dietz, “Exploiting graph-structured data in generative probabilistic models,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
Unsupervised machine learning aims to make predictions when labeled data is absent, and thus supervised machine learning cannot be applied. These algorithms build on assumptions about how data and predictions relate to each other. Generative models are one class of techniques for unsupervised problem settings; they specify the set of assumptions as a probabilistic process that generates the data. The subject of this thesis is how to most effectively exploit input data that has an underlying graph structure in unsupervised learning for three important use cases. The first use case deals with localizing defective code regions in software, given the execution graph of code lines and transitions. Citation networks are exploited in the next use case to quantify the influence of citations on the content of the citing publication. In the final use case, shared tastes of friends in a social network are identified, enabling the prediction of which items of a user a particular friend would be interested in. For each use case, prediction performance is evaluated via held-out test data, which is only scarcely available in these domains. This comparison quantifies under which circumstances each generative model best exploits the given graph structure.
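As a toy illustration of the generative-model idea over graph-structured data (not one of the models studied in the thesis), one can write the assumed generative process directly as sampling code; inference then inverts this process:

```python
import random

def generate_document(own_words, cited_docs, lam=0.3, length=50):
    """Toy citation-influence process: each token is either drawn from the
    citing document's own word distribution (prob. 1 - lam) or copied from
    a uniformly chosen cited document (prob. lam). Inference would invert
    this process to estimate how much each citation influenced the text."""
    words = []
    for _ in range(length):
        if cited_docs and random.random() < lam:
            source = random.choice(cited_docs)  # follow a citation edge
            words.append(random.choice(source))
        else:
            words.append(random.choice(own_words))
    return words

doc = generate_document(["graph", "model", "inference"],
                        [["topic", "dirichlet"], ["network", "edge"]])
print(doc[:10])
```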
Export
BibTeX
@phdthesis{DietzDiss2011, TITLE = {Exploiting graph-structured data in generative probabilistic models}, AUTHOR = {Dietz, Laura}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-529F72EB032DBD41C125782B005898A7-DietzDiss2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {Unsupervised machine learning aims to make predictions when labeled data is absent, and thus supervised machine learning cannot be applied. These algorithms build on assumptions about how data and predictions relate to each other. Generative models are one class of techniques for unsupervised problem settings; they specify the set of assumptions as a probabilistic process that generates the data. The subject of this thesis is how to most effectively exploit input data that has an underlying graph structure in unsupervised learning for three important use cases. The first use case deals with localizing defective code regions in software, given the execution graph of code lines and transitions. Citation networks are exploited in the next use case to quantify the influence of citations on the content of the citing publication. In the final use case, shared tastes of friends in a social network are identified, enabling the prediction of which items of a user a particular friend would be interested in. For each use case, prediction performance is evaluated via held-out test data, which is only scarcely available in these domains. This comparison quantifies under which circumstances each generative model best exploits the given graph structure.}, }
Endnote
%0 Thesis %A Dietz, Laura %Y Scheffer, Tobias %A referee: Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Machine Learning, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Exploiting graph-structured data in generative probabilistic models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-11A2-3 %F EDOC: 618939 %F OTHER: Local-ID: C1256DBF005F876D-529F72EB032DBD41C125782B005898A7-DietzDiss2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P 181 p. %V phd %9 phd %X Unsupervised machine learning aims to make predictions when labeled data is absent, and thus supervised machine learning cannot be applied. These algorithms build on assumptions about how data and predictions relate to each other. Generative models are one class of techniques for unsupervised problem settings; they specify the set of assumptions as a probabilistic process that generates the data. The subject of this thesis is how to most effectively exploit input data that has an underlying graph structure in unsupervised learning for three important use cases. The first use case deals with localizing defective code regions in software, given the execution graph of code lines and transitions. Citation networks are exploited in the next use case to quantify the influence of citations on the content of the citing publication. In the final use case, shared tastes of friends in a social network are identified, enabling the prediction of which items of a user a particular friend would be interested in. For each use case, prediction performance is evaluated via held-out test data, which is only scarcely available in these domains. This comparison quantifies under which circumstances each generative model best exploits the given graph structure.
[117]
Z. Dong, “Visually Pleasing Real-time Global Illumination Rendering for Fully-dynamic Scenes,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@phdthesis{PhDThesis2011Dong, TITLE = {Visually Pleasing Real-time Global Illumination Rendering for Fully-dynamic Scenes}, AUTHOR = {Dong, Zhao}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-36809}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Dong, Zhao %Y Seidel, Hans-Peter %A referee: Kautz, Jan %A referee: Grosch, Thorsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Visually Pleasing Real-time Global Illumination Rendering for Fully-dynamic Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-11AB-2 %F EDOC: 618880 %U urn:nbn:de:bsz:291-scidok-36809 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P 168 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2011/3680/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[118]
D. Emig, “Novel analysis approaches to context-dependent molecular networks,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@phdthesis{Emig2011, TITLE = {Novel analysis approaches to context-dependent molecular networks}, AUTHOR = {Emig, Dorothea}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2011/3843/}, LOCALID = {Local-ID: C125673F004B2D7B-E1ADC0DDC6CE7F89C12579A5000A2C91-Emig2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Emig, Dorothea %Y Albrecht, Mario %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Novel analysis approaches to context-dependent molecular networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-11B7-6 %F EDOC: 618841 %U http://scidok.sulb.uni-saarland.de/volltexte/2011/3843/ %F OTHER: Local-ID: C125673F004B2D7B-E1ADC0DDC6CE7F89C12579A5000A2C91-Emig2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P XVII, 144 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2011/3843/
[119]
L. Kunert, “Maximal Common Subgraph DAGs: Theory and Application to Virtual Screening in Drug Development,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@phdthesis{Kunert2010a, TITLE = {Maximal Common Subgraph {DAG}s: Theory and Application to Virtual Screening in Drug Development}, AUTHOR = {Kunert, Lars}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125673F004B2D7B-A739E601E6423A92C12578340040221B-Kunert2010a}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Kunert, Lars %A referee: Mehlhorn, Kurt %Y Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Maximal Common Subgraph DAGs: Theory and Application to Virtual Screening in Drug Development : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-11B5-A %F EDOC: 618805 %F OTHER: Local-ID: C125673F004B2D7B-A739E601E6423A92C12578340040221B-Kunert2010a %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V phd %9 phd
[120]
Z. Li, “Multicast MAC Extensions for High Rate Real-Time Traffic in Wireless LANs,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
Nowadays we are rapidly moving from a mainly text-based to a multimedia-based Internet, and the widely deployed IEEE 802.11 wireless LANs are among the promising candidates for making multimedia services available to users anywhere, anytime, on any device. However, it is still a challenge to support group-oriented real-time multimedia services, such as video-on-demand, video conferencing, distance education, mobile entertainment services, and interactive games, in wireless LANs, as the current protocols do not support multicast; in particular, they just send multicast packets in open loop as broadcast packets, i.e., without any possible acknowledgements or retransmissions. In this thesis, we focus on MAC layer reliable multicast approaches, which outperform upper layer ones with both shorter delays and higher efficiencies. Different from polling based approaches, which suffer from long delays, low scalability and low efficiency, we explore a feedback jamming mechanism where negative acknowledgement (NACK) frames are allowed from the non-leader receivers to destroy the acknowledgement (ACK) frame from the single leader receiver and prompt retransmissions from the sender. Based on the feedback jamming scheme, we propose two MAC layer multicast error correction protocols, the SEQ driven Leader Based Protocol (SEQ-LBP) and the Hybrid Leader Based Protocol (HLBP); the former is an Automatic Repeat reQuest (ARQ) scheme, while the latter combines both ARQ and packet level Forward Error Correction (FEC). We evaluate the feedback jamming probabilities and the performances of SEQ-LBP and HLBP based on theoretical analyses, NS-2 simulations and experiments on a real test-bed built with consumer wireless LAN cards. Test results confirm the feasibility of the feedback jamming scheme and the outstanding performances of the proposed protocols SEQ-LBP and HLBP; in particular, SEQ-LBP suits small multicast groups due to its short delay, effectiveness and simplicity, while HLBP is better for large multicast groups because of its high efficiency and high scalability with respect to the number of receivers per group.
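The feedback-jamming core of both protocols can be sketched in a few lines (a schematic model, not the actual MAC implementation; frames are simplified to strings):

```python
def receiver_feedback(is_leader, received_ok):
    """Per-packet feedback in a leader-based multicast scheme: the leader
    ACKs a correct reception, while any receiver that missed the packet
    sends a NACK in the same feedback slot."""
    if is_leader:
        return "ACK" if received_ok else "NACK"
    return None if received_ok else "NACK"  # non-leader silence signals success

def sender_should_retransmit(frames_in_slot):
    """Overlapping frames collide, so the sender decodes a clean ACK only
    if the leader ACKed and every non-leader stayed silent."""
    return frames_in_slot != ["ACK"]

# One packet; the leader and one non-leader receive it, another misses it:
slot = [f for f in (receiver_feedback(True, True),
                    receiver_feedback(False, True),
                    receiver_feedback(False, False)) if f is not None]
print(slot, sender_should_retransmit(slot))  # ['ACK', 'NACK'] True
```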
Export
BibTeX
@phdthesis{Li2011, TITLE = {Multicast {MAC} Extensions for High Rate Real-Time Traffic in Wireless {LANs}}, AUTHOR = {Li, Zhao}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {Nowadays we are rapidly moving from a mainly textual-based to a multimedia-based Internet, for which the widely deployed IEEE 802.11 wireless LANs can be one of the promising candidates to make them available to users anywhere, anytime, on any device. However, it is still a challenge to support group-oriented real-time multimedia services, such as video-on-demand, video conferencing, distance educations, mobile entertainment services, interactive games, etc., in wireless LANs, as the current protocols do not support multicast, in particular they just send multicast packets in open-loop as broadcast packets, i.e., without any possible acknowledgements or retransmissions. In this thesis, we focus on MAC layer reliable multicast approaches which outperform upper layer ones with both shorter delays and higher efficiencies. Different from polling based approaches, which suffer from long delays, low scalabilities and low efficiencies, we explore a feedback jamming mechanism where negative acknowledgement (NACK) frames are allowed from the non-leader receivers to destroy the acknowledgement (ACK) frame from the single leader receiver and prompts retransmissions from the sender. Based on the feedback jamming scheme, we propose two MAC layer multicast error correction protocols, SEQ driven Leader Based Protocol (SEQ-LBP) and Hybrid Leader Based Protocol (HLBP), the former is an Automatic Repeat reQuest (ARQ) scheme while the later combines both ARQ and the packet level Forward Error Correction (FEC). We evaluate the feedback jamming probabilities and the performances of SEQ-LBP and HLBP based on theoretical analyses, NS-2 simulations and experiments on a real test-bed built with consumer wireless LAN cards. Test results confirm the feasibility of the feedback jamming scheme and the outstanding performances of the proposed protocols SEQ-LBP and HLBP, in particular SEQ-LBP is good for small multicast groups due to its short delay, effectiveness and simplicity while HLBP is better for large multicast groups because of its high efficiency and high scalability with respect to the number of receivers per group.}, }
Endnote
%0 Thesis %A Li, Zhao %Y Herfet, Thorsten %A referee: Kays, R&#252;diger %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Multicast MAC Extensions for High Rate Real-Time Traffic in Wireless LANs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A7E7-E %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V phd %9 phd %X Nowadays we are rapidly moving from a mainly textual-based to a multimedia-based Internet, for which the widely deployed IEEE 802.11 wireless LANs can be one of the promising candidates to make them available to users anywhere, anytime, on any device. However, it is still a challenge to support group-oriented real-time multimedia services, such as video-on-demand, video conferencing, distance educations, mobile entertainment services, interactive games, etc., in wireless LANs, as the current protocols do not support multicast, in particular they just send multicast packets in open-loop as broadcast packets, i.e., without any possible acknowledgements or retransmissions. In this thesis, we focus on MAC layer reliable multicast approaches which outperform upper layer ones with both shorter delays and higher efficiencies. Different from polling based approaches, which suffer from long delays, low scalabilities and low efficiencies, we explore a feedback jamming mechanism where negative acknowledgement (NACK) frames are allowed from the non-leader receivers to destroy the acknowledgement (ACK) frame from the single leader receiver and prompts retransmissions from the sender. Based on the feedback jamming scheme, we propose two MAC layer multicast error correction protocols, SEQ driven Leader Based Protocol (SEQ-LBP) and Hybrid Leader Based Protocol (HLBP), the former is an Automatic Repeat reQuest (ARQ) scheme while the later combines both ARQ and the packet level Forward Error Correction (FEC). We evaluate the feedback jamming probabilities and the performances of SEQ-LBP and HLBP based on theoretical analyses, NS-2 simulations and experiments on a real test-bed built with consumer wireless LAN cards. Test results confirm the feasibility of the feedback jamming scheme and the outstanding performances of the proposed protocols SEQ-LBP and HLBP, in particular SEQ-LBP is good for small multicast groups due to its short delay, effectiveness and simplicity while HLBP is better for large multicast groups because of its high efficiency and high scalability with respect to the number of receivers per group. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2011/4368/
[121]
M. Manjunath, “A Riemann-Roch Theory for Sublattices of the Root Lattice A_n, Graph Automorphisms and Counting Cycles in Graphs,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
This thesis consists of two independent parts. In the first part of the thesis, we develop a Riemann-Roch theory for sublattices of the root lattice A_n, extending the work of Baker and Norine (Advances in Mathematics, 215(2): 766-788, 2007), and study questions that arise from this theory. Our theory is based on the study of critical points of a certain simplicial distance function on a lattice and establishes connections between the Riemann-Roch theory and the Voronoi diagrams of lattices under certain simplicial distance functions. In particular, we provide a new geometric approach for the study of the Laplacian of graphs. As a consequence, we obtain a geometric proof of the Riemann-Roch theorem for graphs and generalise the result to other sublattices of A_n. Furthermore, we use the geometric approach to study the problem of computing the rank of a divisor on a finite multigraph G, obtaining an algorithm that runs in polynomial time for a fixed number of vertices, in particular with running time 2^{O(n log n)} · poly(size(G)), where n is the number of vertices of G. Motivated by this theory, we study a dimensionality reduction approach to the graph automorphism problem, and we also obtain an algorithm for the related problem of counting automorphisms of graphs that is based on exponential sums. In the second part of the thesis, we develop an approach, based on complex-valued hash functions, to count cycles in graphs in the data streaming model. Our algorithm is based on the idea of computing instances of complex-valued random variables over the given stream and improves drastically upon the naive sampling algorithm.
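For orientation, the graph-theoretic Riemann-Roch theorem of Baker and Norine (2007) that the first part generalises can be stated in one line; the formulation below is the standard one and is quoted here only as background.

```latex
% Riemann-Roch for graphs (Baker-Norine, 2007):
% for a divisor D on a connected multigraph G = (V, E),
\[
  r(D) - r(K - D) = \deg(D) + 1 - g,
  \qquad g = |E| - |V| + 1,
  \qquad K = \sum_{v \in V} \bigl(\deg(v) - 2\bigr)\,(v),
\]
% where g is the cycle rank of G and K is the canonical divisor.
```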
Export
BibTeX
@phdthesis{ManjunathPhd2011, TITLE = {A {R}iemann-{R}och Theory for Sublattices of the Root Lattice A n, Graph Automorphisms and Counting Cycles in Graphs}, AUTHOR = {Manjunath, Madhusudan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {This thesis consists of two independent parts. In the rst part of the thesis, we develop a Riemann-Roch theory for sublattices of the root lattice An extending the work of Baker and Norine (Advances in Mathematics, 215(2): 766-788, 2007) and study questions that arise from this theory. Our theory is based on the study of critical points of a certain simplicial distance function on a lattice and establishes connections between the Riemann-Roch theory and the Voronoi diagrams of lattices under certain simplicial distance functions. In particular, we provide a new geometric approach for the study of the Laplacian of graphs. As a consequence, we obtain a geometric proof of the Riemann-Roch theorem for graphs and generalise the result to other sub-lattices of An. Furthermore, we use the geometric approach to study the problem of computing the rank of a divisor on a finite multigraph G to obtain an algorithm that runs in polynomial time for a fiixed number of vertices, in particular with running time 2O(n log n)poly(size(G)) where n is the number of vertices of G. Motivated by this theory, we study a dimensionality reduction approach to the graph automorphism problem and we also obtain an algorithm for the related problem of counting automorphisms of graphs that is based on exponential sums. In the second part of the thesis, we develop an approach, based on complex-valued hash functions, to count cycles in graphs in the data streaming model. Our algorithm is based on the idea of computing instances of complex-valued random variables over the given stream and improves drastically upon the nave sampling algorithm.}, }
Endnote
%0 Thesis %A Manjunath, Madhusudan %Y Mehlhorn, Kurt %A referee: Haase, Christian %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T A Riemann-Roch Theory for Sublattices of the Root Lattice A n, Graph Automorphisms and Counting Cycles in Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A7FB-2 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V phd %9 phd %X This thesis consists of two independent parts. In the rst part of the thesis, we develop a Riemann-Roch theory for sublattices of the root lattice An extending the work of Baker and Norine (Advances in Mathematics, 215(2): 766-788, 2007) and study questions that arise from this theory. Our theory is based on the study of critical points of a certain simplicial distance function on a lattice and establishes connections between the Riemann-Roch theory and the Voronoi diagrams of lattices under certain simplicial distance functions. In particular, we provide a new geometric approach for the study of the Laplacian of graphs. As a consequence, we obtain a geometric proof of the Riemann-Roch theorem for graphs and generalise the result to other sub-lattices of An. Furthermore, we use the geometric approach to study the problem of computing the rank of a divisor on a finite multigraph G to obtain an algorithm that runs in polynomial time for a fiixed number of vertices, in particular with running time 2O(n log n)poly(size(G)) where n is the number of vertices of G. Motivated by this theory, we study a dimensionality reduction approach to the graph automorphism problem and we also obtain an algorithm for the related problem of counting automorphisms of graphs that is based on exponential sums. In the second part of the thesis, we develop an approach, based on complex-valued hash functions, to count cycles in graphs in the data streaming model. Our algorithm is based on the idea of computing instances of complex-valued random variables over the given stream and improves drastically upon the nave sampling algorithm. %U http://scidok.sulb.uni-saarland.de/volltexte/2012/4781/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[122]
I. Rauf, “Polynomially Solvable Cases of Hypergraph Transversal and Related Problems,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
This thesis is mainly concerned with the hypergraph transversal problem, which asks to generate all minimal transversals of a given hypergraph. While the current best upper bound on the complexity of the problem is quasi-polynomial in the combined input and output sizes, it is known to be solvable in output-polynomial time for a number of hypergraph classes. We extend this polynomial frontier to the hypergraphs induced by hyperplanes and constant-sided polytopes in fixed dimension R^d, and to hypergraphs for which the intersection of every minimal transversal with every hyperedge is bounded. We also show the problem to be fixed-parameter tractable with respect to the minimum integer k such that the input hypergraph is k-degenerate, as well as with respect to its maximum complementary degree, and we improve the known bounds when the parameter is the maximum degree of the hypergraph. We also study the readability of a monotone Boolean function, which is defined as the minimum integer r such that the function can be represented by an AND-OR formula in which every variable occurs at most r times. We prove that it is NP-hard to approximate the readability of even a depth-three Boolean formula. We also give tight sublinear upper bounds on the readability of a monotone Boolean function given in CNF (or DNF) form, parameterized by the number of terms in the CNF and the maximum number of variables in the intersection of any constant number of terms. For interval DNFs we give much tighter logarithmic bounds on the readability. Finally, we discuss an implementation of a quasi-polynomial algorithm for the hypergraph transversal problem that runs in polynomial space. We found our implementation to be competitive with all but one previous implementation on various datasets.
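To make the problem statement concrete, the brute-force sketch below enumerates all minimal transversals directly from the definition: a transversal meets every hyperedge, and it is minimal if no proper subset is also a transversal. This enumeration is exponential and serves only as an illustration; it is unrelated to the polynomial-space, quasi-polynomial implementation discussed in the thesis.

```python
from itertools import chain, combinations

def minimal_transversals(vertices, hyperedges):
    """Enumerate all minimal transversals of a hypergraph by brute force."""
    def is_transversal(s):
        return all(s & e for e in hyperedges)  # s meets every hyperedge

    subsets = chain.from_iterable(
        combinations(vertices, k) for k in range(len(vertices) + 1))
    transversals = [frozenset(s) for s in subsets if is_transversal(frozenset(s))]
    # Keep only the inclusion-minimal ones.
    return [t for t in transversals
            if not any(u < t for u in transversals)]

edges = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]
print(minimal_transversals([1, 2, 3], edges))
# -> [frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})]
```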
Export
BibTeX
@phdthesis{Rauf2011, TITLE = {Polynomially Solvable Cases of Hypergraph Transversal and Related Problems}, AUTHOR = {Rauf, Imran}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {This thesis is mainly concerned with the hypergraph transversal problem, which asks to generate all minimal transversals of a given hypergraph. While the current best upper bound on the complexity of the problem is quasi-polynomial in the combined input and output sizes, it is shown to be solvable in output polynomial time for a number of hypergraph classes. We extend this polynomial frontier to the hypergraphs induced by hyperplanes and constant-sided polytopes in fixed dimension R^d and hypergraphs for which every minimal transversal and hyperedge intersection is bounded. We also show the problem to be fixed parameter tractable with respect to the minimum integer k such that the input hypergraph is k-degenerate, and also with respect to its maximum complementary degree. Whereas we improve the known bounds when the parameter is the maximum degree of a hypergraph. We also study the readability of a monotone Boolean function which is defined as the minimum integer r such that it can be represented by an AND-OR-formula with every variable occurrence is bounded by r. We prove that it is NP-hard to approximate the readability of even a depth three Boolean formula. We also give tight sublinear upper bounds on the readability of a monotone Boolean function given in CNF (or DNF) form, parameterized by the number of terms in the CNF and the maximum number of variables in the intersection of any constant number of terms. For interval DNF's we give much tighter logarithmic bounds on the readability. Finally, we discuss an implementation of a quasi-polynomial algorithm for the hypergraph transversal problem that runs in polynomial space. We found our implementation to be competitive with all but one previous implementation on various datasets.}, }
Endnote
%0 Thesis %A Rauf, Imran %Y Mehlhorn, Kurt %A referee: Boros, Endre %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Polynomially Solvable Cases of Hypergraph Transversal and Related Problems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-AB0E-9 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V phd %9 phd %X This thesis is mainly concerned with the hypergraph transversal problem, which asks to generate all minimal transversals of a given hypergraph. While the current best upper bound on the complexity of the problem is quasi-polynomial in the combined input and output sizes, it is shown to be solvable in output polynomial time for a number of hypergraph classes. We extend this polynomial frontier to the hypergraphs induced by hyperplanes and constant-sided polytopes in fixed dimension R^d and hypergraphs for which every minimal transversal and hyperedge intersection is bounded. We also show the problem to be fixed parameter tractable with respect to the minimum integer k such that the input hypergraph is k-degenerate, and also with respect to its maximum complementary degree. Whereas we improve the known bounds when the parameter is the maximum degree of a hypergraph. We also study the readability of a monotone Boolean function which is defined as the minimum integer r such that it can be represented by an AND-OR-formula with every variable occurrence is bounded by r. We prove that it is NP-hard to approximate the readability of even a depth three Boolean formula. We also give tight sublinear upper bounds on the readability of a monotone Boolean function given in CNF (or DNF) form, parameterized by the number of terms in the CNF and the maximum number of variables in the intersection of any constant number of terms. For interval DNF's we give much tighter logarithmic bounds on the readability. Finally, we discuss an implementation of a quasi-polynomial algorithm for the hypergraph transversal problem that runs in polynomial space. We found our implementation to be competitive with all but one previous implementation on various datasets. %U http://scidok.sulb.uni-saarland.de/volltexte/2011/4471/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[123]
A. Tevs, “Deformable Shape Matching,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@phdthesis{ArtDiss2011, TITLE = {Deformable Shape Matching}, AUTHOR = {Tevs, Art}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-45768}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Tevs, Art %Y Seidel, Hans-Peter %A referee: Ihrke, Ivo %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Deformable Shape Matching : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-1192-7 %F EDOC: 618909 %U urn:nbn:de:bsz:291-scidok-45768 %I Universität des Saarlandes %C Saarbrücken %D 2011 %P 201 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2012/4576/
[124]
A. Thielen, “Genotypic Analysis of HIV-1 Coreceptor Usage,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
The acquired immunodeficiency syndrome (AIDS) is one of the biggest medical challenges in the world today. Its causative pathogen, the human immunodeficiency virus (HIV), is responsible for millions of deaths per year. Although about two dozen antiviral drugs are currently available, progression of the disease can only be delayed; patients cannot be cured. In recent years, the new class of coreceptor antagonists has been added to the arsenal of antiretroviral drugs. These drugs block viral cell entry by binding to one of the receptors the virus requires for infection of a cell. However, some HIV variants can also use another coreceptor, so coreceptor usage has to be tested before administration of the drug. This thesis analyzes the use of statistical learning methods to infer HIV coreceptor usage from the viral genotype. Improvements over existing methods are achieved by using sequence information from previously unused genomic regions and from next-generation sequencing technologies, and by combining different existing prediction systems. In addition, HIV coreceptor usage prediction is analyzed with respect to clinical outcome in patients treated with coreceptor antagonists. The results demonstrate that inference of HIV coreceptor usage from the viral genotype can be used reliably in daily routine.
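At its core, genotype-based coreceptor prediction is supervised sequence classification. The sketch below is a generic illustration on invented toy data (the sequences and labels are made up), not one of the thesis's models: aligned V3-loop fragments are one-hot encoded and a linear support vector machine separates CCR5-using from CXCR4-using variants.

```python
import numpy as np
from sklearn.svm import LinearSVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(seq: str) -> np.ndarray:
    """One-hot encode an aligned amino-acid sequence, position by position."""
    vec = np.zeros(len(seq) * len(AMINO_ACIDS))
    for i, aa in enumerate(seq):
        vec[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
    return vec

# Toy, invented training data: aligned fragments labelled with the
# coreceptor the variant uses (0 = CCR5, 1 = CXCR4).
train_seqs = ["CTRPNN", "CTRPSN", "CIRPKH", "CVRPKH"]
labels = [0, 0, 1, 1]

X = np.stack([one_hot(s) for s in train_seqs])
clf = LinearSVC().fit(X, labels)
print(clf.predict([one_hot("CTRPNN"), one_hot("CIRPKH")]))  # expected: [0 1]
```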
Export
BibTeX
@phdthesis{Thielen2011diss, TITLE = {Genotypic Analysis of {HIV}-1 Coreceptor Usage}, AUTHOR = {Thielen, Alexander}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2011/4034/}, LOCALID = {Local-ID: C125673F004B2D7B-D95B13CBBC2F24FCC12579A30050374D-Thielen2011diss}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {The acquired immunodeficiency syndrome (AIDS) is one of the biggest medical challenges in the world today. Its causative pathogen, the human immunodeficiency virus (HIV), is responsible for millions of deaths per year. Although about two dozen antiviral drugs are currently available, progression of the disease can only be delayed but patients cannot be cured. In recent years, the new class of coreceptor antagonists has been added to the arsenal of antiretroviral drugs. These drugs block viral cell-entry by binding to one of the receptors the virus requires for infection of a cell. However, some HIV variants can also use another coreceptor so that coreceptor usage has to be tested before administration of the drug. This thesis analyzes the use of statistical learning methods to infer HIV coreceptor usage from viral genotype. Improvements over existing methods are achieved by using sequence information of so far not used genomic regions, next generation sequencing technologies, and by combining different existing prediction systems. In addition, HIV coreceptor usage prediction is analyzed with respect to clinical outcome in patients treated with coreceptor antagonists. The results demonstrate that inferring HIV coreceptor usage from viral genotype can be reliably used in daily routine.}, }
Endnote
%0 Thesis %A Thielen, Alexander %Y Lengauer, Thomas %A referee: Lenhof, Hans-Peter %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Genotypic Analysis of HIV-1 Coreceptor Usage : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-11AD-D %F EDOC: 618839 %U http://scidok.sulb.uni-saarland.de/volltexte/2011/4034/ %F OTHER: Local-ID: C125673F004B2D7B-D95B13CBBC2F24FCC12579A30050374D-Thielen2011diss %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P 184 p. %V phd %9 phd %X The acquired immunodeficiency syndrome (AIDS) is one of the biggest medical challenges in the world today. Its causative pathogen, the human immunodeficiency virus (HIV), is responsible for millions of deaths per year. Although about two dozen antiviral drugs are currently available, progression of the disease can only be delayed but patients cannot be cured. In recent years, the new class of coreceptor antagonists has been added to the arsenal of antiretroviral drugs. These drugs block viral cell-entry by binding to one of the receptors the virus requires for infection of a cell. However, some HIV variants can also use another coreceptor so that coreceptor usage has to be tested before administration of the drug. This thesis analyzes the use of statistical learning methods to infer HIV coreceptor usage from viral genotype. Improvements over existing methods are achieved by using sequence information of so far not used genomic regions, next generation sequencing technologies, and by combining different existing prediction systems. In addition, HIV coreceptor usage prediction is analyzed with respect to clinical outcome in patients treated with coreceptor antagonists. The results demonstrate that inferring HIV coreceptor usage from viral genotype can be reliably used in daily routine. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2011/4034/
[125]
C. Winzen, “Toward a complexity theory for randomized search heuristics : black box models,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@phdthesis{WinzenDiss2011, TITLE = {Toward a complexity theory for randomized search heuristics : black box models}, AUTHOR = {Winzen, Carola}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2011/4534/}, LOCALID = {Local-ID: C1256428004B93B8-F36A76D7E98548BFC12579A000398864-WinzenDiss2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Winzen, Carola %Y Mehlhorn, Kurt %A referee: Bläser, Markus %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Toward a complexity theory for randomized search heuristics : black box models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-1194-3 %F EDOC: 618740 %U http://scidok.sulb.uni-saarland.de/volltexte/2011/4534/ %F OTHER: Local-ID: C1256428004B93B8-F36A76D7E98548BFC12579A000398864-WinzenDiss2011 %I Universität des Saarlandes %C Saarbrücken %D 2011 %P 171 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2011/4534/ http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[126]
G. Ziegler, “GPU Data Structures for Graphics and Vision,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@phdthesis{PhDThesis2011Ziegler, TITLE = {{GPU} Data Structures for Graphics and Vision}, AUTHOR = {Ziegler, Gernot}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-39699}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Ziegler, Gernot %Y Seidel, Hans-Peter %A referee: Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T GPU Data Structures for Graphics and Vision : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-11B3-E %F EDOC: 618881 %U urn:nbn:de:bsz:291-scidok-39699 %I Universität des Saarlandes %C Saarbrücken %D 2011 %P 176 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2011/3969/ http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
2010
[127]
S. Ali, “Semantic Interoperability of Ambient Intelligent Medical Devices and e-Health Systems,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{AliPhD2010, TITLE = {Semantic Interoperability of Ambient Intelligent Medical Devices and e-Health Systems}, AUTHOR = {Ali, Safdar}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-03CC30E8E9C6F3B8C125783A0039584F-AliPhD2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Ali, Safdar %Y Fuhr, Günther %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations %T Semantic Interoperability of Ambient Intelligent Medical Devices and e-Health Systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1492-3 %F EDOC: 536421 %F OTHER: Local-ID: C1256DBF005F876D-03CC30E8E9C6F3B8C125783A0039584F-AliPhD2010 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2010/2963/
[128]
A. Altmann, “Bioinformatical Approaches to Ranking of anti-HIV Combination Therapies and Planning of Treatment Schedule,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{Altmann2010, TITLE = {Bioinformatical Approaches to Ranking of anti-{HIV} Combination Therapies and Planning of Treatment Schedule}, AUTHOR = {Altmann, Andr{\'e}}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-32135}, LOCALID = {Local-ID: C125673F004B2D7B-141A5B192249AA0DC12578230059F631-Altmann2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Altmann, André %A referee: Rahnenführer, Jörg %A referee: Lenhof, Hans-Peter %Y Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Bioinformatical Approaches to Ranking of anti-HIV Combination Therapies and Planning of Treatment Schedule : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-146B-C %F EDOC: 536657 %F OTHER: Local-ID: C125673F004B2D7B-141A5B192249AA0DC12578230059F631-Altmann2010 %U urn:nbn:de:bsz:291-scidok-32135 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3213/ http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[129]
T. O. Aydin, “Human Visual System Models in Computer Graphics,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{AydinPhD2010, TITLE = {Human Visual System Models in Computer Graphics}, AUTHOR = {Aydin, Tunc Ozan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Aydin, Tunc Ozan %Y Myszkowski, Karol %A referee: Slusallek, Philipp %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Human Visual System Models in Computer Graphics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1435-4 %F EDOC: 537276 %I Universität des Saarlandes %C Saarbrücken %D 2010 %P XIV, 134 p. %V phd %9 phd
[130]
K. Berberich, “Temporal Search in Web Archives,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
Web archives include both archives of contents originally published on the Web (e.g., the Internet Archive) and archives of contents published long ago that are now accessible on the Web (e.g., the archive of The Times). Thanks to the increased awareness that web-born contents are worth preserving and to improved digitization techniques, web archives have grown in number and size. To unfold their full potential, search techniques are needed that consider their inherent special characteristics. This work addresses three important problems toward this objective and makes the following contributions:
* We present the Time-Travel Inverted indeX (TTIX) as an efficient solution to time-travel text search in web archives, allowing users to search only the parts of the web archive that existed at a user's time of interest.
* To counter the negative effects that terminology evolution has on the quality of search results in web archives, we propose a novel query-reformulation technique, so that old but highly relevant documents are retrieved in response to today's queries.
* For temporal information needs, for which the user is best satisfied by documents that refer to particular times, we describe a retrieval model that integrates temporal expressions (e.g., "in the 1990s") seamlessly into a language modeling approach.
Experiments for each of the proposed methods show their efficiency and effectiveness, respectively, and demonstrate the viability of our approach to search in web archives.
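The first contribution rests on a simple idea that a toy index makes concrete: postings carry validity time intervals, and a time-travel query at time t considers only postings whose interval contains t. The sketch below illustrates just this idea and nothing more; the actual TTIX design (partitioning, coalescing, ranking) is considerably more involved.

```python
from collections import defaultdict

class TimeTravelIndex:
    """Toy inverted index whose postings carry [begin, end) validity intervals."""

    def __init__(self):
        self.postings = defaultdict(list)  # term -> [(doc, begin, end)]

    def add(self, term: str, doc: str, begin: int, end: int):
        self.postings[term].append((doc, begin, end))

    def query(self, term: str, t: int):
        """Return documents that contained `term` at time t."""
        return [doc for doc, begin, end in self.postings[term] if begin <= t < end]

idx = TimeTravelIndex()
idx.add("walkman", "doc1", 1985, 2001)  # term present in versions from 1985 to 2000
idx.add("walkman", "doc2", 1990, 1995)
print(idx.query("walkman", 1992))  # -> ['doc1', 'doc2']
print(idx.query("walkman", 2005))  # -> []
```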
Export
BibTeX
@phdthesis{Berberich2010, TITLE = {Temporal Search in Web Archives}, AUTHOR = {Berberich, Klaus}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2010/3281/}, LOCALID = {Local-ID: C1256DBF005F876D-05A4D1CFDEB5957FC125776E002452A5-Berberich2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Web archives include both archives of contents originally published on the Web (e.g., the Internet Archive) but also archives of contents published long ago that are now accessible on the Web (e.g., the archive of The Times). Thanks to the increased awareness that web-born contents are worth preserving and to improved digitization techniques, web archives have grown in number and size. To unfold their full potential, search techniques are needed that consider their inherent special characteristics. This work addresses three important problems toward this objective and makes the following contributions: * We present the Time-Travel Inverted indeX (TTIX) as an efficient solution to time-travel text search in web archives, allowing users to search only the parts of the web archive that existed at a user's time of interest. * To counter negative effects that terminology evolution has on the quality of search results in web archives, we propose a novel query-reformulation technique, so that old but highly relevant documents are retrieved in response to today's queries. * For temporal information needs, for which the user is best satisfied by documents that refer to particular times, we describe a retrieval model that integrates temporal expressions (e.g., ``in the 1990s'') seamlessly into a language modeling approach. Experiments for each of the proposed methods show their efficiency and effectiveness, respectively, and demonstrate the viability of our approach to search in web archives.}, }
Endnote
%0 Thesis %A Berberich, Klaus %Y Weikum, Gerhard %A referee: Seeger, Bernhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Temporal Search in Web Archives : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1456-9 %F EDOC: 536373 %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3281/ %F OTHER: Local-ID: C1256DBF005F876D-05A4D1CFDEB5957FC125776E002452A5-Berberich2010 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V phd %9 phd %X Web archives include both archives of contents originally published on the Web (e.g., the Internet Archive) but also archives of contents published long ago that are now accessible on the Web (e.g., the archive of The Times). Thanks to the increased awareness that web-born contents are worth preserving and to improved digitization techniques, web archives have grown in number and size. To unfold their full potential, search techniques are needed that consider their inherent special characteristics. This work addresses three important problems toward this objective and makes the following contributions: * We present the Time-Travel Inverted indeX (TTIX) as an efficient solution to time-travel text search in web archives, allowing users to search only the parts of the web archive that existed at a user's time of interest. * To counter negative effects that terminology evolution has on the quality of search results in web archives, we propose a novel query-reformulation technique, so that old but highly relevant documents are retrieved in response to today's queries. * For temporal information needs, for which the user is best satisfied by documents that refer to particular times, we describe a retrieval model that integrates temporal expressions (e.g., ``in the 1990s'') seamlessly into a language modeling approach. Experiments for each of the proposed methods show their efficiency and effectiveness, respectively, and demonstrate the viability of our approach to search in web archives. %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3281/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[131]
G. de Melo, “Graph-Based Methods for Large-Scale Multilingual Knowledge Integration,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{deMeloPhD2010, TITLE = {Graph-Based Methods for Large-Scale Multilingual Knowledge Integration}, AUTHOR = {de Melo, Gerard}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-548EE6FE4AD82F0EC125783A0052B317-deMeloPhD2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A de Melo, Gerard %Y Weikum, Gerhard %A referee: Uszkoreit, Hans %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Graph-Based Methods for Large-Scale Multilingual Knowledge Integration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1431-C %F EDOC: 536407 %F OTHER: Local-ID: C1256DBF005F876D-548EE6FE4AD82F0EC125783A0052B317-deMeloPhD2010 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2011/4300/ http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[132]
R. Harren, “Two-dimensional packing problems,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{HarrenPhD2010, TITLE = {Two-dimensional packing problems}, AUTHOR = {Harren, Rolf}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2010/3470/}, LOCALID = {Local-ID: C1256428004B93B8-65F299DCA07EFC87C125783A005410C0-HarrenPhD2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Harren, Rolf %Y Weikum, Gerhard %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Two-dimensional packing problems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-143E-1 %F EDOC: 536708 %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3470/ %F OTHER: Local-ID: C1256428004B93B8-65F299DCA07EFC87C125783A005410C0-HarrenPhD2010 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de http://scidok.sulb.uni-saarland.de/volltexte/2010/3470/
[133]
N. Hasler, “Modelling Human Pose and Shape Based on a Database of Human 3D Scans,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
Generating realistic human shapes and motion is an important task both in the motion picture industry and in computer games. In feature films, high quality and believability are the most important characteristics. Additionally, when creating virtual doubles, the generated characters have to match given real persons as closely as possible. In contrast, in computer games the level of realism does not need to be as high, but real-time performance is essential. It is desirable to meet all these requirements with a general model of human pose and shape. In addition, many markerless human tracking methods applied, e.g., in biomedicine or sports science can benefit greatly from the availability of such a model, because most methods require a 3D model of the tracked subject as input, which can be generated on the fly given a suitable shape and pose model. In this thesis, a comprehensive procedure is presented to generate different general models of human pose and shape. A database of 3D scans spanning the space of human pose and shape variations is introduced. Then, four different approaches for transforming the database into a general model of human pose and shape are presented, which improve the current state of the art. Experiments are performed to evaluate and compare the proposed models on real-world problems: characters are generated subject to semantic constraints, and the underlying shape and pose of humans are estimated from 3D scans, multi-view video, or uncalibrated monocular images.
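Body models built from registered scan databases commonly rest on a linear (PCA) shape space. The sketch below shows that generic construction on synthetic stand-in data; it illustrates the ingredient, not the particular pose and shape models developed in the thesis.

```python
import numpy as np

# Synthetic stand-in for a database of registered scans:
# n_scans meshes, each with n_verts vertices in correspondence.
rng = np.random.default_rng(0)
n_scans, n_verts = 50, 500
scans = rng.normal(size=(n_scans, n_verts * 3))  # flattened (x, y, z) per vertex

# Build a linear shape space: mean shape plus principal components.
mean_shape = scans.mean(axis=0)
centered = scans - mean_shape
_, sing_vals, components = np.linalg.svd(centered, full_matrices=False)

def synthesize(coeffs: np.ndarray, k: int = 10) -> np.ndarray:
    """Generate a new body shape from k PCA coefficients."""
    return mean_shape + coeffs @ components[:k]

new_shape = synthesize(rng.normal(size=10))
print(new_shape.reshape(n_verts, 3).shape)  # (500, 3) vertex positions
```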
Export
BibTeX
@phdthesis{HaslerPhD2010, TITLE = {Modelling Human Pose and Shape Based on a Database of Human {3D} Scans}, AUTHOR = {Hasler, Nils}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-32795}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Generating realistic human shapes and motion is an important task both in the motion picture industry and in computer games. In feature films, high quality and believability are the most important characteristics. Additionally, when creating virtual doubles the generated characters have to match as closely as possible to given real persons. In contrast, in computer games the level of realism does not need to be as high but real-time performance is essential. It is desirable to meet all these requirements with a general model of human pose and shape. In addition, many markerless human tracking methods applied, e.g., in biomedicine or sports science can benefit greatly from the availability of such a model because most methods require a 3D model of the tracked subject as input, which can be generated on-the-fly given a suitable shape and pose model. In this thesis, a comprehensive procedure is presented to generate different general models of human pose. A database of 3D scans spanning the space of human pose and shape variations is introduced. Then, four different approaches for transforming the database into a general model of human pose and shape are presented, which improve the current state of the art. Experiments are performed to evaluate and compare the proposed models on real-world problems, i.e., characters are generated given semantic constraints and the underlying shape and pose of humans given 3D scans, multi-view video, or uncalibrated monocular images is estimated.}, }
Endnote
%0 Thesis %A Hasler, Nils %Y Seidel, Hans-Peter %A referee: Rosenhahn, Bodo %A referee: Thorm&#228;hlen, Thorsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Modelling Human Pose and Shape Based on a Database of Human 3D Scans : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1465-7 %F EDOC: 537275 %U http://surn:nbn:de:bsz:291-scidok-32795 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %P VIII, 95 p. %V phd %9 phd %X Generating realistic human shapes and motion is an important task both in the motion picture industry and in computer games. In feature films, high quality and believability are the most important characteristics. Additionally, when creating virtual doubles the generated charactes have to match as closely as possible to given real persons. In contrast, in computer games the level of realism does not need to be as high but real-time performance is essential. It is desirable to meet all these requirements with a general model of human pose and shape. In addition, many markerless human tracking methods applied, e.g., in biomedicine or sports science can benefit greatly from the availability of such a model because most methods require a 3D model of the tracked subject as input, which can be generated on-the-fly given a suitable shape and pose model. In this thesis, a comprehensive procedure is presented to generate different general models of human pose. A database of 3D scans spanning the space of human pose and shape variations is introduced. Then, four different approaches for transforming the database into a general model of human pose and shape are presented, which improve the current state of the art. Experiments are performed to evaluate and compare the proposed models on real-world problems, i.e., characters are generated given semantic constraints and the underlying shape and pose of humans given 3D scans, multi-view video, or uncalibrated monocular images is estimated. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2010/3279/
[134]
R. Herzog, “Exploiting Coherence in Lighting and Shading Computations,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
Computing global illumination (GI) in virtual scenes is becoming increasingly attractive, even for real-time applications. GI delivers important cues for the perception of 3D virtual scenes, which matters for material and architectural design. Therefore, for photo-realistic rendering in the design and even the game industry, GI has become indispensable. While the computer simulation of realistic global lighting is well studied and often considered solved, computing it efficiently is not. Saving computation costs is therefore the main motivation of current research in GI. Efficient algorithms have to take various aspects into account, such as the algorithmic complexity and convergence, the mapping to parallel processing hardware, and knowledge of certain lighting properties, including the capabilities of the human visual system. In this dissertation we exploit both low-level and high-level coherence in the practical design of GI algorithms for a variety of target applications, ranging from high-quality production rendering to dynamic real-time rendering. We also focus on automatic rendering-accuracy control that approximates GI in such a way that the error is perceptually uniform across the result images, taking into account not only the limitations of the human visual system but also later video compression with an MPEG encoder. In addition, this dissertation provides many ideas and supplementary material that complement the published work and could be of practical relevance.
Export
BibTeX
@phdthesis{HerzogDiss2010, TITLE = {Exploiting Coherence in Lighting and Shading Computations}, AUTHOR = {Herzog, Robert}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Computing global illumination (GI) in virtual scenes becomes increasingly attractive even for real-time applications nowadays. GI delivers important cues in the perception of 3D virtual scenes, which is important for material and architectural design. Therefore, for photo-realistic rendering in the design and even the game industry, GI has become indispensable. While the computer simulation of realistic global lighting is well-studied and often considered as solved, computing it efficiently is not. Saving computation costs is therefore the main motivation of current research in GI. Efficient algorithms have to take various aspects into account, such as the algorithmic complexity and convergence, its mapping to parallel processing hardware, and the knowledge of certain lighting properties including the capabilities of the human visual system. In this dissertation we exploit both low-level and high-level coherence in the practical design of GI algorithms for a variety of target applications ranging from high-quality production rendering to dynamic real-time rendering. We also focus on automatic rendering-accuracy control to approximate GI in such a way that the error is perceptually unified in the result images, thereby taking not only into account the limitations of the human visual system but also later video compression with an MPEG encoder. In addition, this dissertation provides many ideas and supplementary material, which complements published work and could be of practical relevance.}, }
Endnote
%0 Thesis %A Herzog, Robert %Y Seidel, Hans-Peter %A referee: Bouatouch, Kadi %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Exploiting Coherence in Lighting and Shading Computations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-145C-E %F EDOC: 537272 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V phd %9 phd %X Computing global illumination (GI) in virtual scenes becomes increasingly attractive even for real-time applications nowadays. GI delivers important cues in the perception of 3D virtual scenes, which is important for material and architectural design. Therefore, for photo-realistic rendering in the design and even the game industry, GI has become indispensable. While the computer simulation of realistic global lighting is well-studied and often considered as solved, computing it efficiently is not. Saving computation costs is therefore the main motivation of current research in GI. Efficient algorithms have to take various aspects into account, such as the algorithmic complexity and convergence, its mapping to parallel processing hardware, and the knowledge of certain lighting properties including the capabilities of the human visual system. In this dissertation we exploit both low-level and high-level coherence in the practical design of GI algorithms for a variety of target applications ranging from high-quality production rendering to dynamic real-time rendering. We also focus on automatic rendering-accuracy control to approximate GI in such a way that the error is perceptually unified in the result images, thereby taking not only into account the limitations of the human visual system but also later video compression with an MPEG encoder. In addition, this dissertation provides many ideas and supplementary material, which complements published work and could be of practical relevance.
[135]
M. Horbach, “Saturation-based Decision Procedures for Fixed Domain and Minimal Model Semantics,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
Superposition is an established decision procedure for a variety of first-order logic theories represented by sets of clauses. A satisfiable theory, saturated by superposition, implicitly defines a minimal Herbrand model for the theory. This raises the question of to what extent superposition calculi can be employed for reasoning about such minimal models. This is indeed often possible when existential properties are considered. However, proving universal properties directly leads to a modification of the minimal model's term-generated domain, as new Skolem functions are introduced. For many applications, this is not desired because it changes the problem. In this thesis, I propose the first superposition calculus that can explicitly represent existentially quantified variables and can thus compute with respect to a given fixed domain. It does not eliminate existential variables by Skolemization, but handles them using additional constraints with which each clause is annotated. This calculus is sound and refutationally complete in the limit for a fixed-domain semantics. For saturated Horn theories and classes of positive formulas, the calculus is even complete for proving properties of the minimal model itself, going beyond the scope of known superposition-based approaches. The calculus is applicable to every set of clauses with equality and does not rely on any syntactic restrictions of the input. Extensions of the calculus lead to various new decision procedures for minimal-model validity. A main feature of these decision procedures is that even the validity of queries containing one quantifier alternation can be decided. In particular, I prove that the validity of any formula with at most one quantifier alternation is decidable in models represented by a finite set of atoms, and that the validity of several classes of such formulas is decidable in models represented by so-called disjunctions of implicit generalizations. Moreover, I show that the decision of minimal-model validity can be reduced to the superposition-based decision of first-order validity for models of a class of predicative Horn clauses where all function symbols are at most unary.
Export
BibTeX
@phdthesis{Horbach2010PhD, TITLE = {Saturation-based Decision Procedures for Fixed Domain and Minimal Model Semantics}, AUTHOR = {Horbach, Matthias}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-32826}, LOCALID = {Local-ID: C125716C0050FB51-8C390F163CB3D25AC12577EC0037127A-Horbach2010PhD}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Superposition is an established decision procedure for a variety of first-order logic theories represented by sets of clauses. A satisfiable theory, saturated by superposition, implicitly defines a minimal Herbrand model for the theory. This raises the question in how far superposition calculi can be employed for reasoning about such minimal models. This is indeed often possible when existential properties are considered. However, proving universal properties directly leads to a modification of the minimal model's term-generated domain, as new Skolem functions are introduced. For many applications, this is not desired because it changes the problem. In this thesis, I propose the first superposition calculus that can explicitly represent existentially quantified variables and can thus compute with respect to a given fixed domain. It does not eliminate existential variables by Skolemization, but handles them using additional constraints with which each clause is annotated. This calculus is sound and refutationally complete in the limit for a fixed domain semantics. For saturated Horn theories and classes of positive formulas, the calculus is even complete for proving properties of the minimal model itself, going beyond the scope of known superposition-based approaches. The calculus is applicable to every set of clauses with equality and does not rely on any syntactic restrictions of the input. Extensions of the calculus lead to various new decision procedures for minimal model validity. A main feature of these decision procedures is that even the validity of queries containing one quantifier alternation can be decided. In particular, I prove that the validity of any formula with at most one quantifier alternation is decidable in models represented by a finite set of atoms and that the validity of several classes of such formulas is decidable in models represented by so-called disjunctions of implicit generalizations. Moreover, I show that the decision of minimal model validity can be reduced to the superposition-based decision of first-order validity for models of a class of predicative Horn clauses where all function symbols are at most unary.}, }
Endnote
%0 Thesis %A Horbach, Matthias %Y Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Saturation-based Decision Procedures for Fixed Domain and Minimal Model Semantics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1453-F %F EDOC: 536344 %F OTHER: Local-ID: C125716C0050FB51-8C390F163CB3D25AC12577EC0037127A-Horbach2010PhD %U urn:nbn:de:bsz:291-scidok-32826 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V phd %9 phd %X Superposition is an established decision procedure for a variety of first-order logic theories represented by sets of clauses. A satisfiable theory, saturated by superposition, implicitly defines a minimal Herbrand model for the theory. This raises the question in how far superposition calculi can be employed for reasoning about such minimal models. This is indeed often possible when existential properties are considered. However, proving universal properties directly leads to a modification of the minimal model's term-generated domain, as new Skolem functions are introduced. For many applications, this is not desired because it changes the problem. In this thesis, I propose the first superposition calculus that can explicitly represent existentially quantified variables and can thus compute with respect to a given fixed domain. It does not eliminate existential variables by Skolemization, but handles them using additional constraints with which each clause is annotated. This calculus is sound and refutationally complete in the limit for a fixed domain semantics. For saturated Horn theories and classes of positive formulas, the calculus is even complete for proving properties of the minimal model itself, going beyond the scope of known superposition-based approaches. The calculus is applicable to every set of clauses with equality and does not rely on any syntactic restrictions of the input. Extensions of the calculus lead to various new decision procedures for minimal model validity. A main feature of these decision procedures is that even the validity of queries containing one quantifier alternation can be decided. In particular, I prove that the validity of any formula with at most one quantifier alternation is decidable in models represented by a finite set of atoms and that the validity of several classes of such formulas is decidable in models represented by so-called disjunctions of implicit generalizations. Moreover, I show that the decision of minimal model validity can be reduced to the superposition-based decision of first-order validity for models of a class of predicative Horn clauses where all function symbols are at most unary. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=dehttp://scidok.sulb.uni-saarland.de/volltexte/2010/3282/
[136]
A. Huber, “Randomized Rounding and Rumor Spreading with Stochastic Dependencies,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{HuberPhD2010, TITLE = {Randomized Rounding and Rumor Spreading with Stochastic Dependencies}, AUTHOR = {Huber, Anna}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2010/3425/}, LOCALID = {Local-ID: C1256428004B93B8-954DBB579F1257DCC125783A0053A8B1-HuberPhD2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Huber, Anna %Y Doerr, Benjamin %A referee: Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Randomized Rounding and Rumor Spreading with Stochastic Dependencies %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1444-2 %F EDOC: 536707 %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3425/ %F OTHER: Local-ID: C1256428004B93B8-954DBB579F1257DCC125783A0053A8B1-HuberPhD2010 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3425/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[137]
M. B. Hullin, “Reconsidering Light Transport : Acquisition and Display of Real-World Reflectance and Geometry,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
In this thesis, we cover three scenarios that violate common simplifying assumptions about the nature of light transport. We begin with the first ingredient to any 3D rendering: a geometry model. Most 3D scanners require the object of interest to show diffuse reflectance. The further a material deviates from the Lambertian model, the more likely these setups are to produce corrupted results. By placing a traditional laser scanning setup in a participating (in particular, fluorescent) medium, we have built a light sheet scanner that delivers robust results for a wide range of materials, including glass. Further investigating the phenomenon of fluorescence, we notice that, despite its ubiquity, it has received moderate attention in computer graphics. In particular, to date no data-driven reflectance models of fluorescent materials have been available. To describe the wavelength-shifting reflectance of fluorescent materials, we define the bispectral bidirectional reflectance and reradiation distribution function (BRRDF), for which we introduce an image-based measurement setup as well as an efficient acquisition scheme. Finally, we envision a computer display that shows materials instead of colours, and present a prototypical device that can exhibit anisotropic reflectance distributions similar to common models in computer graphics.
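For orientation, the role of a bispectral BRRDF can be written by analogy with the reflectance term of the rendering equation; the notation below is our own sketch of the idea, not necessarily the formulation used in the thesis:

\[ L_o(\omega_o, \lambda_o) = \int_{\Lambda} \int_{\Omega} f_{\mathrm{BRRDF}}(\omega_i, \omega_o, \lambda_i, \lambda_o)\, L_i(\omega_i, \lambda_i)\, \cos\theta_i \,\mathrm{d}\omega_i \,\mathrm{d}\lambda_i \]

Here the outgoing radiance at wavelength \(\lambda_o\) integrates contributions reradiated from all incident wavelengths \(\lambda_i\); the expression reduces to an ordinary spectral BRDF when \(f_{\mathrm{BRRDF}}\) vanishes for \(\lambda_i \neq \lambda_o\).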
Export
BibTeX
@phdthesis{HullinPhD2010, TITLE = {Reconsidering Light Transport : Acquisition and Display of Real-World Reflectance and Geometry}, AUTHOR = {Hullin, Matthias B.}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-40235}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {In this thesis, we cover three scenarios that violate common simplifying assumptions about the nature of light transport. We begin with the first ingredient to any 3D rendering: a geometry model. Most 3D scanners require the object of interest to show diffuse reflectance. The further a material deviates from the Lambertian model, the more likely these setups are to produce corrupted results. By placing a traditional laser scanning setup in a participating (in particular, fluorescent) medium, we have built a light sheet scanner that delivers robust results for a wide range of materials, including glass. Further investigating the phenomenon of fluorescence, we notice that, despite its ubiquity, it has received moderate attention in computer graphics. In particular, to date no data-driven reflectance models of fluorescent materials have been available. To describe the wavelength-shifting reflectance of fluorescent materials, we define the bispectral bidirectional reflectance and reradiation distribution function (BRRDF), for which we introduce an image-based measurement setup as well as an efficient acquisition scheme. Finally, we envision a computer display that shows materials instead of colours, and present a prototypical device that can exhibit anisotropic reflectance distributions similar to common models in computer graphics.}, }
Endnote
%0 Thesis %A Hullin, Matthias B. %Y Seidel, Hans-Peter %A referee: Lensch, Hendrik P. A. %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Reconsidering Light Transport : Acquisition and Display of Real-World Reflectance and Geometry %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1433-8 %F EDOC: 537322 %U urn:nbn:de:bsz:291-scidok-40235 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %X In this thesis, we cover three scenarios that violate common simplifying assumptions about the nature of light transport. We begin with the first ingredient to any 3D rendering: a geometry model. Most 3D scanners require the object of interest to show diffuse reflectance. The further a material deviates from the Lambertian model, the more likely these setups are to produce corrupted results. By placing a traditional laser scanning setup in a participating (in particular, fluorescent) medium, we have built a light sheet scanner that delivers robust results for a wide range of materials, including glass. Further investigating the phenomenon of fluorescence, we notice that, despite its ubiquity, it has received moderate attention in computer graphics. In particular, to date no data-driven reflectance models of fluorescent materials have been available. To describe the wavelength-shifting reflectance of fluorescent materials, we define the bispectral bidirectional reflectance and reradiation distribution function (BRRDF), for which we introduce an image-based measurement setup as well as an efficient acquisition scheme. Finally, we envision a computer display that shows materials instead of colours, and present a prototypical device that can exhibit anisotropic reflectance distributions similar to common models in computer graphics. %U http://scidok.sulb.uni-saarland.de/volltexte/2011/4023/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[138]
C. Ihlemann, “Reasoning in Combinations of Theories,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
Verification problems are often expressed in a language which mixes several theories. A natural question to ask is whether one can use decision procedures for individual theories to construct a decision procedure for the union theory. In the cases where this is possible, one has a powerful method at hand to handle complex theories effectively. The setup considered in this thesis is that of one base theory which is extended by one or more theories. The question is whether and when a given ground satisfiability problem in the extended setting can be effectively reduced to an equi-satisfiable problem over the base theory. A case where this reductive approach is always possible is that of so-called local theory extensions. The theory of local extensions is developed and some applications concerning monotone functions are given. Then the theory of local theory extensions is generalized in order to deal with data structures that exhibit local behavior. It will be shown that a suitable fragment of both the theory of arrays and the theory of pointers is local in this broader sense. Finally, the case of more than one theory extension is discussed. In particular, a modularity result is given: under certain circumstances, the locality of each of the extensions lifts to locality of the entire extension. The reductive approach outlined above has become particularly relevant in recent years due to the rise of powerful solvers for background theories common in verification tasks. These so-called SMT solvers effectively handle theories such as real linear or integer arithmetic. As part of this thesis, a program called H-PILoT was implemented which carries out reductive reasoning for local theory extensions. H-PILoT found applications in mathematics, multiple-valued logics, data structures and reasoning in complex systems.
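A standard textbook instance of a local theory extension (our illustration, not an example taken from the thesis) is linear arithmetic extended with a free monotone function \(f\), axiomatized by

\[ \forall x, y\; (x \le y \rightarrow f(x) \le f(y)). \]

To decide a ground goal such as \(a \le b \wedge f(a) > f(b)\), it suffices to instantiate the monotonicity axiom with the ground terms already occurring in the goal (here \(x \mapsto a\), \(y \mapsto b\)), treat \(f(a)\) and \(f(b)\) as fresh constants, and hand the resulting ground problem to a solver for the base theory, which immediately reports unsatisfiability. Locality is exactly the guarantee that such finite instantiation is complete.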
Export
BibTeX
@phdthesis{IhlemannDiss2010, TITLE = {Reasoning in Combinations of Theories}, AUTHOR = {Ihlemann, Carsten}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2010/3472/}, LOCALID = {Local-ID: C125716C0050FB51-82ED9E54BEB32A4AC12577FF00605F42-IhlemannDiss2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Verification problems are often expressed in a language which mixes several theories. A natural question to ask is whether one can use decision procedures for individual theories to construct a decision procedure for the union theory. In the cases where this is possible, one has a powerful method at hand to handle complex theories effectively. The setup considered in this thesis is that of one base theory which is extended by one or more theories. The question is whether and when a given ground satisfiability problem in the extended setting can be effectively reduced to an equi-satisfiable problem over the base theory. A case where this reductive approach is always possible is that of so-called \emph{local theory extensions}. The theory of local extensions is developed and some applications concerning monotone functions are given. Then the theory of local theory extensions is generalized in order to deal with data structures that exhibit local behavior. It will be shown that a suitable fragment of both the theory of arrays and the theory of pointers is local in this broader sense. Finally, the case of more than one theory extension is discussed. In particular, a \emph{modularity} result is given: under certain circumstances, the locality of each of the extensions lifts to locality of the entire extension. The reductive approach outlined above has become particularly relevant in recent years due to the rise of powerful solvers for background theories common in verification tasks. These so-called SMT solvers effectively handle theories such as real linear or integer arithmetic. As part of this thesis, a program called \emph{\mbox{H-PILoT}} was implemented which carries out reductive reasoning for local theory extensions. H-PILoT found applications in mathematics, multiple-valued logics, data structures and reasoning in complex systems.}, }
Endnote
%0 Thesis %A Ihlemann, Carsten %Y Sofronie-Stokkermans, Viorica %A referee: Ghilardi, Silvio %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Reasoning in Combinations of Theories %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-144B-3 %F EDOC: 536351 %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3472/ %F OTHER: Local-ID: C125716C0050FB51-82ED9E54BEB32A4AC12577FF00605F42-IhlemannDiss2010 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %X Verification problems are often expressed in a language which mixes several theories. A natural question to ask is whether one can use decision procedures for individual theories to construct a decision procedure for the union theory. In the cases where this is possible, one has a powerful method at hand to handle complex theories effectively. The setup considered in this thesis is that of one base theory which is extended by one or more theories. The question is whether and when a given ground satisfiability problem in the extended setting can be effectively reduced to an equi-satisfiable problem over the base theory. A case where this reductive approach is always possible is that of so-called local theory extensions. The theory of local extensions is developed and some applications concerning monotone functions are given. Then the theory of local theory extensions is generalized in order to deal with data structures that exhibit local behavior. It will be shown that a suitable fragment of both the theory of arrays and the theory of pointers is local in this broader sense. Finally, the case of more than one theory extension is discussed. In particular, a modularity result is given: under certain circumstances, the locality of each of the extensions lifts to locality of the entire extension. The reductive approach outlined above has become particularly relevant in recent years due to the rise of powerful solvers for background theories common in verification tasks. These so-called SMT solvers effectively handle theories such as real linear or integer arithmetic. As part of this thesis, a program called H-PILoT was implemented which carries out reductive reasoning for local theory extensions. H-PILoT found applications in mathematics, multiple-valued logics, data structures and reasoning in complex systems. %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3472/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[139]
S. Jacobs, “Hierarchic Decision Procedures for Verification,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{JacobsDiss2010, TITLE = {Hierarchic Decision Procedures for Verification}, AUTHOR = {Jacobs, Swen}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-29478}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Jacobs, Swen %Y Sofronie-Stokkermans, Viorica %A referee: Kuncak, Viktor %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Hierarchic Decision Procedures for Verification %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-16E7-6 %U urn:nbn:de:bsz:291-scidok-29478 %I Universität des Saarlandes %C Saarbrücken %D 2010 %P 121 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2010/2947/
[140]
D. Johannsen, “Random combinatorial structures and randomized search heuristics,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{JohannsenPhD2010, TITLE = {Random combinatorial structures and randomized search heuristics}, AUTHOR = {Johannsen, Daniel}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2011/3529/}, LOCALID = {Local-ID: C1256428004B93B8-292FDC32C659172EC125783A00455378-JohannsenPhD2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Johannsen, Daniel %Y Doerr, Benjamin %A referee: Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Random combinatorial structures and randomized search heuristics %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1458-5 %F EDOC: 536705 %U http://scidok.sulb.uni-saarland.de/volltexte/2011/3529/ %F OTHER: Local-ID: C1256428004B93B8-292FDC32C659172EC125783A00455378-JohannsenPhD2010 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2011/3529/
[141]
S. Kratsch, “Kernelization of generic problems : upper and lower bounds,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{KratschPhD2010, TITLE = {Kernelization of generic problems : upper and lower bounds}, AUTHOR = {Kratsch, Stefan}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256428004B93B8-E103020C69961AC3C125783A005312D9-KratschPhD2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Kratsch, Stefan %Y Mehlhorn, Kurt %A referee: Bodlaender, Hans L. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Kernelization of generic problems : upper and lower bounds %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1448-9 %F EDOC: 536706 %F OTHER: Local-ID: C1256428004B93B8-E103020C69961AC3C125783A005312D9-KratschPhD2010 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2011/3530/
[142]
E. Pyrga, “Algorithmic Game Theory and Networks,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
In this thesis we study three different problems that belong to the intersection of Game Theory and Computer Science. The first concerns the design of efficient protocols for a Contention Resolution problem regarding selfish users who all need to transmit information over a common single-access channel. We provide efficient solutions for different variants of the problem, depending on the feedback that the users can receive from the channel. The second problem concerns the Price of Stability of a fair cost sharing Network Design problem for undirected graphs. We consider the general case, for which the best known upper bound is the harmonic number H_n, where n is the number of players, and the best known lower bound is 12/7 ≈ 1.714. We improve the previously best lower bound to 42/23 ≈ 1.8261. Furthermore, we study two- and three-player instances. Our upper bounds indicate a separation between the Price of Stability on undirected graphs and that on directed graphs, where H_n is tight. Previously, such a gap was only known for the cases where all players shared a terminal, and for weighted players. Finally, the last problem applies Game Theory as an evaluation tool for a computer system: we employ the concept of Stochastic Stability from Evolutionary Game Theory as a measure for the efficiency of different queue policies that can be employed at an Internet router.
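In the standard notation (ours, added for orientation), the Price of Stability compares the best Nash equilibrium with the social optimum, and the harmonic-number bound referred to above is

\[ \mathrm{PoS} = \frac{\min_{S\ \text{Nash equilibrium}} \mathrm{cost}(S)}{\mathrm{cost}(\mathrm{OPT})}, \qquad H_n = \sum_{i=1}^{n} \frac{1}{i} = \Theta(\log n), \]

so the results above locate the Price of Stability of undirected fair cost sharing between 42/23 ≈ 1.8261 and H_n.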
Export
BibTeX
@phdthesis{PyrgaPhD2010, TITLE = {Algorithmic Game Theory and Networks}, AUTHOR = {Pyrga, Evangelia}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256428004B93B8-52F6ED55CD6BDF80C125783A003A69DE-PyrgaPhD2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {In this thesis we study three different problems that belong to the intersection of Game Theory and Computer Science. The first concerns the design of efficient protocols for a Contention Resolution problem regarding selfish users who all need to transmit information over a common single-access channel. We provide efficient solutions for different variants of the problem, depending on the feedback that the users can receive from the channel. The second problem concerns the Price of Stability of a fair cost sharing Network Design problem for undirected graphs. We consider the general case, for which the best known upper bound is the harmonic number $H_n$, where $n$ is the number of players, and the best known lower bound is $12/7 \approx 1.714$. We improve the previously best lower bound to $42/23 \approx 1.8261$. Furthermore, we study two- and three-player instances. Our upper bounds indicate a separation between the Price of Stability on undirected graphs and that on directed graphs, where $H_n$ is tight. Previously, such a gap was only known for the cases where all players shared a terminal, and for weighted players. Finally, the last problem applies Game Theory as an evaluation tool for a computer system: we employ the concept of Stochastic Stability from Evolutionary Game Theory as a measure for the efficiency of different queue policies that can be employed at an Internet router.}, }
Endnote
%0 Thesis %A Pyrga, Evangelia %Y Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Algorithmic Game Theory and Networks %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1477-1 %F EDOC: 536704 %F OTHER: Local-ID: C1256428004B93B8-52F6ED55CD6BDF80C125783A003A69DE-PyrgaPhD2010 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %X In this thesis we study three different problems that belong to the intersection of Game Theory and Computer Science. The first concerns the design of efficient protocols for a Contention Resolution problem regarding selfish users who all need to transmit information over a common single-access channel. We provide efficient solutions for different variants of the problem, depending on the feedback that the users can receive from the channel. The second problem concerns the Price of Stability of a fair cost sharing Network Design problem for undirected graphs. We consider the general case, for which the best known upper bound is the harmonic number H_n, where n is the number of players, and the best known lower bound is 12/7 ≈ 1.714. We improve the previously best lower bound to 42/23 ≈ 1.8261. Furthermore, we study two- and three-player instances. Our upper bounds indicate a separation between the Price of Stability on undirected graphs and that on directed graphs, where H_n is tight. Previously, such a gap was only known for the cases where all players shared a terminal, and for weighted players. Finally, the last problem applies Game Theory as an evaluation tool for a computer system: we employ the concept of Stochastic Stability from Evolutionary Game Theory as a measure for the efficiency of different queue policies that can be employed at an Internet router.
[143]
W. Saleem, “Digital Processing and Management Tools for 2D and 3D Shape Repositories,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{SaleemPhD2010, TITLE = {Digital Processing and Management Tools for {2D} and {3D} Shape Repositories}, AUTHOR = {Saleem, Waqar}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Saleem, Waqar %Y Seidel, Hans-Peter %A referee: Belyaev, Alexander %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Digital Processing and Management Tools for 2D and 3D Shape Repositories %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1467-3 %F EDOC: 537271 %I Universität des Saarlandes %C Saarbrücken %D 2010 %P XIV, 137 p. %V phd %9 phd
[144]
A. Schlicker, “Ontology-based Similarity Measures and their Application in Bioinformatics,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
Genome-wide sequencing projects of many different organisms produce large numbers of sequences that are functionally characterized using experimental and bioinformatics methods. Following the development of the first bio-ontologies, knowledge of the functions of genes and proteins is increasingly made available in a standardized format. This allows for devising approaches that directly exploit functional information using semantic and functional similarity measures. This thesis addresses different aspects of the development and application of such similarity measures. First, we analyze semantic and functional similarity measures and apply them for investigating the functional space in different taxa. Second, a new software program and a new database are described, which overcome limitations of existing tools and simplify the utilization of similarity measures for different applications. Third, we delineate two applications of our functional similarity measures. We utilize them for analyzing domain and protein interaction datasets and derive thresholds for grouping predicted domain interactions into low- and high-confidence subsets. We also present the new MedSim method for prioritization of candidate disease genes, which is based on the observation that genes and proteins contributing to similar diseases are functionally related. We demonstrate that the MedSim method performs at least as well as more complex state-of-the-art methods and significantly outperforms current methods that also utilize functional annotation.
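As a point of reference for the kind of measure involved, the following minimal Python sketch computes the classical Lin similarity from information content over an ontology. It illustrates the general family such measures build on, not the specific measures developed in the thesis; the toy ontology and helper names are our own.

import math

# Toy ontology: term -> set of ancestors (including the term itself),
# plus annotation probabilities p(t) estimated from some corpus.
ANCESTORS = {
    "root": {"root"},
    "binding": {"root", "binding"},
    "protein_binding": {"root", "binding", "protein_binding"},
    "dna_binding": {"root", "binding", "dna_binding"},
}
PROB = {"root": 1.0, "binding": 0.5, "protein_binding": 0.1, "dna_binding": 0.05}

def ic(term):
    # Information content: IC(t) = -log p(t)
    return -math.log(PROB[term])

def lin_similarity(t1, t2):
    # Lin (1998): sim(t1, t2) = 2 * IC(MICA) / (IC(t1) + IC(t2)),
    # where MICA is the most informative common ancestor.
    common = ANCESTORS[t1] & ANCESTORS[t2]
    mica_ic = max(ic(a) for a in common)
    return 2.0 * mica_ic / (ic(t1) + ic(t2))

print(lin_similarity("protein_binding", "dna_binding"))  # shares "binding"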
Export
BibTeX
@phdthesis{Schlicker2010, TITLE = {Ontology-based Similarity Measures and their Application in Bioinformatics}, AUTHOR = {Schlicker, Andreas}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2010/3429/}, LOCALID = {Local-ID: C125673F004B2D7B-E9C50306DF804193C12577EB00373178-Schlicker2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Genome-wide sequencing projects of many different organisms produce large numbers of sequences that are functionally characterized using experimental and bioinformatics methods. Following the development of the first bio-ontologies, knowledge of the functions of genes and proteins is increasingly made available in a standardized format. This allows for devising approaches that directly exploit functional information using semantic and functional similarity measures. This thesis addresses different aspects of the development and application of such similarity measures. First, we analyze semantic and functional similarity measures and apply them for investigating the functional space in different taxa. Second, a new software program and a new database are described, which overcome limitations of existing tools and simplify the utilization of similarity measures for different applications. Third, we delineate two applications of our functional similarity measures. We utilize them for analyzing domain and protein interaction datasets and derive thresholds for grouping predicted domain interactions into low- and high-confidence subsets. We also present the new MedSim method for prioritization of candidate disease genes, which is based on the observation that genes and proteins contributing to similar diseases are functionally related. We demonstrate that the MedSim method performs at least as well as more complex state-of-the-art methods and significantly outperforms current methods that also utilize functional annotation.}, }
Endnote
%0 Thesis %A Schlicker, Andreas %Y Lengauer, Thomas %A referee: Albrecht, Mario %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Ontology-based Similarity Measures and their Application in Bioinformatics %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-143A-9 %F EDOC: 536633 %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3429/ %F OTHER: Local-ID: C125673F004B2D7B-E9C50306DF804193C12577EB00373178-Schlicker2010 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %X Genome-wide sequencing projects of many different organisms produce large numbers of sequences that are functionally characterized using experimental and bioinformatics methods. Following the development of the first bio-ontologies, knowledge of the functions of genes and proteins is increasingly made available in a standardized format. This allows for devising approaches that directly exploit functional information using semantic and functional similarity measures. This thesis addresses different aspects of the development and application of such similarity measures. First, we analyze semantic and functional similarity measures and apply them for investigating the functional space in different taxa. Second, a new software program and a new database are described, which overcome limitations of existing tools and simplify the utilization of similarity measures for different applications. Third, we delineate two applications of our functional similarity measures. We utilize them for analyzing domain and protein interaction datasets and derive thresholds for grouping predicted domain interactions into low- and high-confidence subsets. We also present the new MedSim method for prioritization of candidate disease genes, which is based on the observation that genes and proteins contributing to similar diseases are functionally related. We demonstrate that the MedSim method performs at least as well as more complex state-of-the-art methods and significantly outperforms current methods that also utilize functional annotation. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3429/
[145]
A. Starostin, “Formal Verification of Demand Paging,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{Starostin2010, TITLE = {Formal Verification of Demand Paging}, AUTHOR = {Starostin, Artem}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Starostin, Artem %Y Paul, Wolfgang %A referee: Wilhelm, Reinhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Formal Verification of Demand Paging %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B3D1-4 %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd
[146]
H. Zhu, “Characterization, Classification and Alignment of Protein-protein Interfaces,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@phdthesis{Zhu2010a, TITLE = {Characterization, Classification and Alignment of Protein-protein Interfaces}, AUTHOR = {Zhu, Hongbo}, LANGUAGE = {eng}, URL = {http://scidok.sulb.uni-saarland.de/volltexte/2010/3278/}, LOCALID = {Local-ID: C125673F004B2D7B-F18C31CDC42018BBC1257834004000B5-Zhu2010a}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Zhu, Hongbo %Y Lengauer, Thomas %A referee: Lackner, Peter %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Characterization, Classification and Alignment of Protein-protein Interfaces %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1461-F %F EDOC: 536665 %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3278/ %F OTHER: Local-ID: C125673F004B2D7B-F18C31CDC42018BBC1257834004000B5-Zhu2010a %I Universität des Saarlandes %C Saarbrücken %D 2010 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3278/
2009
[147]
N. Ahmed, “High Quality Dynamic Reflectance and Surface Reconstruction from Video,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
The creation of high quality animations of real-world human actors has long been a challenging problem in computer graphics. It involves the modeling of the shape of the virtual actors, creating their motion, and the reproduction of very fine dynamic details. In order to render the actor under arbitrary lighting, it is required that reflectance properties are modeled for each point on the surface. These steps, which are usually performed manually by professional modelers, are time-consuming and cumbersome. In this thesis, we show that algorithmic solutions for some of the problems that arise in the creation of high quality animation of real-world people are possible using multi-view video data. First, we present a novel spatio-temporal approach to create a personalized avatar from multi-view video data of a moving person. Thereafter, we propose two enhancements to a method that captures human shape, motion and reflectance properties of a moving human using eight multi-view video streams. Afterwards we extend this work, and in order to add very fine dynamic details to the geometric models, such as wrinkles and folds in the clothing, we make use of the multi-view video recordings and present a statistical method that can passively capture the fine-grained details of time-varying scene geometry. Finally, in order to reconstruct structured shape and animation of the subject from video, we present a dense 3D correspondence finding method that enables spatiotemporally coherent reconstruction of surface animations directly from multi-view video data. These algorithmic solutions can be combined to constitute a complete animation pipeline for acquisition, reconstruction and rendering of high quality virtual actors from multi-view video data. They can also be used individually in a system that requires the solution of a specific algorithmic sub-problem. The results demonstrate that using multi-view video data it is possible to find the model description that enables realistic appearance of animated virtual actors under different lighting conditions and exhibits high quality dynamic details in the geometry.
Export
BibTeX
@phdthesis{Ahmed2009, TITLE = {High Quality Dynamic Reflectance and Surface Reconstruction from Video}, AUTHOR = {Ahmed, Naveed}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-37B1DB1BA6379517C12576C5003D26A2-Ahmed2009}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {The creation of high quality animations of real-world human actors has long been a challenging problem in computer graphics. It involves the modeling of the shape of the virtual actors, creating their motion, and the reproduction of very fine dynamic details. In order to render the actor under arbitrary lighting, it is required that reflectance properties are modeled for each point on the surface. These steps, which are usually performed manually by professional modelers, are time-consuming and cumbersome. In this thesis, we show that algorithmic solutions for some of the problems that arise in the creation of high quality animation of real-world people are possible using multi-view video data. First, we present a novel spatio-temporal approach to create a personalized avatar from multi-view video data of a moving person. Thereafter, we propose two enhancements to a method that captures human shape, motion and reflectance properties of a moving human using eight multi-view video streams. Afterwards we extend this work, and in order to add very fine dynamic details to the geometric models, such as wrinkles and folds in the clothing, we make use of the multi-view video recordings and present a statistical method that can passively capture the fine-grained details of time-varying scene geometry. Finally, in order to reconstruct structured shape and animation of the subject from video, we present a dense 3D correspondence finding method that enables spatiotemporally coherent reconstruction of surface animations directly from multi-view video data. These algorithmic solutions can be combined to constitute a complete animation pipeline for acquisition, reconstruction and rendering of high quality virtual actors from multi-view video data. They can also be used individually in a system that requires the solution of a specific algorithmic sub-problem. The results demonstrate that using multi-view video data it is possible to find the model description that enables realistic appearance of animated virtual actors under different lighting conditions and exhibits high quality dynamic details in the geometry.}, }
Endnote
%0 Thesis %A Ahmed, Naveed %Y Seidel, Hans-Peter %A referee: Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T High Quality Dynamic Reflectance and Surface Reconstruction from Video %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-3113-F %F EDOC: 520439 %F OTHER: Local-ID: C125675300671F7B-37B1DB1BA6379517C12576C5003D26A2-Ahmed2009 %I Universität des Saarlandes %C Saarbrücken %D 2009 %V phd %9 phd %X The creation of high quality animations of real-world human actors has long been a challenging problem in computer graphics. It involves the modeling of the shape of the virtual actors, creating their motion, and the reproduction of very fine dynamic details. In order to render the actor under arbitrary lighting, it is required that reflectance properties are modeled for each point on the surface. These steps, which are usually performed manually by professional modelers, are time-consuming and cumbersome. In this thesis, we show that algorithmic solutions for some of the problems that arise in the creation of high quality animation of real-world people are possible using multi-view video data. First, we present a novel spatio-temporal approach to create a personalized avatar from multi-view video data of a moving person. Thereafter, we propose two enhancements to a method that captures human shape, motion and reflectance properties of a moving human using eight multi-view video streams. Afterwards we extend this work, and in order to add very fine dynamic details to the geometric models, such as wrinkles and folds in the clothing, we make use of the multi-view video recordings and present a statistical method that can passively capture the fine-grained details of time-varying scene geometry. Finally, in order to reconstruct structured shape and animation of the subject from video, we present a dense 3D correspondence finding method that enables spatiotemporally coherent reconstruction of surface animations directly from multi-view video data. These algorithmic solutions can be combined to constitute a complete animation pipeline for acquisition, reconstruction and rendering of high quality virtual actors from multi-view video data. They can also be used individually in a system that requires the solution of a specific algorithmic sub-problem. The results demonstrate that using multi-view video data it is possible to find the model description that enables realistic appearance of animated virtual actors under different lighting conditions and exhibits high quality dynamic details in the geometry. %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2561/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[148]
R. Angelova, “Graph-based Classification and Clustering of Entities in Heterogeneous Networks,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
We address the problem of multi-label classification of relational graphs by proposing a framework that models the input graph as a first order Markov random field and devises a relaxation labeling procedure to find its maximally likely labeling. We apply this framework to classification as well as clustering problems in homogeneous networks and show significant performance gains in comparison to state-of-the-art techniques. We also address the problem of multi-label classification in heterogeneous networks where every data point is associated with a node type and has to be labeled with one or more classes from a type-specific finite set of classes. Our algorithm is based on a random walk model. We present detailed empirical studies of our model and compare it with state-of-the-art techniques on two social networks. All newly proposed algorithms are robust to scarce training data and diverse linkage patterns. They improve classification or clustering quality in homogeneous and heterogeneous networks.
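For orientation, a generic relaxation-labeling iteration of the kind used for such problems (our sketch of the classical Rosenfeld-Hummel-Zucker scheme, not necessarily the exact update rule of the thesis) repeatedly renormalizes each node's label beliefs by the support from its neighbors:

\[ p_i^{(t+1)}(\ell) = \frac{p_i^{(t)}(\ell)\, q_i^{(t)}(\ell)}{\sum_{\ell'} p_i^{(t)}(\ell')\, q_i^{(t)}(\ell')}, \qquad q_i^{(t)}(\ell) = \sum_{j \in N(i)} \sum_{\ell'} r_{ij}(\ell, \ell')\, p_j^{(t)}(\ell'), \]

where \(p_i(\ell)\) is node \(i\)'s current belief in label \(\ell\), \(N(i)\) its graph neighborhood, and \(r_{ij}\) a compatibility score between labels of neighboring nodes.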
Export
BibTeX
@phdthesis{Angelova2009z, TITLE = {Graph-based Classification and Clustering of Entities in Heterogeneous Networks}, AUTHOR = {Angelova, Ralitsa}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-52DE6083AAC46318C12576C500368667-Angelova2009z}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {We address the problem of multi-label classification of relational graphs by proposing a framework that models the input graph as a first order Markov random field and devises a relaxation labeling procedure to find its maximally likely labeling. We apply this framework to classification as well as clustering problems in homogeneous networks and show significant performance gains in comparison to state-of-the-art techniques. We also address the problem of multi-label classification in heterogeneous networks where every data point is associated with a node type and has to be labeled with one or more classes from a type-specific finite set of classes. Our algorithm is based on a random walk model. We present detailed empirical studies of our model and compare it with state-of-the-art techniques on two social networks. All newly proposed algorithms are robust to scarce training data and diverse linkage patterns. They improve classification or clustering quality in homogeneous and heterogeneous networks.}, }
Endnote
%0 Thesis %A Angelova, Ralitsa %Y Milios, Evangelos E. %A referee: Keim, Daniel A. %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Graph-based Classification and Clustering of Entities in Heterogeneous Networks %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17B1-3 %F EDOC: 520436 %F OTHER: Local-ID: C1256DBF005F876D-52DE6083AAC46318C12576C500368667-Angelova2009z %I Universität des Saarlandes %C Saarbrücken %D 2009 %V phd %9 phd %X We address the problem of multi-label classification of relational graphs by proposing a framework that models the input graph as a first order Markov random field and devises a relaxation labeling procedure to find its maximally likely labeling. We apply this framework to classification as well as clustering problems in homogeneous networks and show significant performance gains in comparison to state-of-the-art techniques. We also address the problem of multi-label classification in heterogeneous networks where every data point is associated with a node type and has to be labeled with one or more classes from a type-specific finite set of classes. Our algorithm is based on a random walk model. We present detailed empirical studies of our model and compare it with state-of-the-art techniques on two social networks. All newly proposed algorithms are robust to scarce training data and diverse linkage patterns. They improve classification or clustering quality in homogeneous and heterogeneous networks.
[149]
C. Fuchs, “Capturing and Reconstructing the Appearance of Complex 3D Scenes,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
In this thesis, we present our research on new acquisition methods for reflectance properties of real-world objects. Specifically, we first show a method for acquiring spatially varying densities in volumes of translucent, gaseous material with just a single image. This makes the method applicable to constantly changing phenomena like smoke without the use of high-speed camera equipment. Furthermore, we investigated how two well-known techniques, synthetic aperture confocal imaging and algorithmic descattering, can be combined to help see through a translucent medium like fog or murky water. We show that the depth at which we can still see an object embedded in the scattering medium is increased. In a related publication, we show how polarization and descattering based on phase-shifting can be combined for efficient 3D scanning of translucent objects. Normally, subsurface scattering hinders the range estimation by offsetting the peak intensity beneath the surface away from the point of incidence. With our method, the subsurface scattering is reduced to a minimum and therefore reliable 3D scanning is made possible. Finally, we present a system which recovers surface geometry, reflectance properties of opaque objects, and prevailing lighting conditions at the time of image capture from just a small number of input photographs. While there exist previous approaches to recover reflectance properties, our system is the first to work on images taken under almost arbitrary, changing lighting conditions. This enables us to use images taken from a community photo collection website.
Export
BibTeX
@phdthesis{Fuchs2009:Thesis, TITLE = {Capturing and Reconstructing the Appearance of Complex {3D} Scenes}, AUTHOR = {Fuchs, Christian}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-85B64E48E51CA543C12576C700357A64-Fuchs2009:Thesis}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {In this thesis, we present our research on new acquisition methods for reflectance properties of real-world objects. Specifically, we first show a method for acquiring spatially varying densities in volumes of translucent, gaseous material with just a single image. This makes the method applicable to constantly changing phenomena like smoke without the use of high-speed camera equipment. Furthermore, we investigated how two well-known techniques -- synthetic aperture confocal imaging and algorithmic descattering -- can be combined to help see through a translucent medium like fog or murky water. We show that the depth at which we can still see an object embedded in the scattering medium is increased. In a related publication, we show how polarization and descattering based on phase-shifting can be combined for efficient 3D~scanning of translucent objects. Normally, subsurface scattering hinders the range estimation by offsetting the peak intensity beneath the surface away from the point of incidence. With our method, the subsurface scattering is reduced to a minimum and therefore reliable 3D~scanning is made possible. Finally, we present a system which recovers surface geometry, reflectance properties of opaque objects, and prevailing lighting conditions at the time of image capture from just a small number of input photographs. While there exist previous approaches to recover reflectance properties, our system is the first to work on images taken under almost arbitrary, changing lighting conditions. This enables us to use images taken from a community photo collection website.}, }
Endnote
%0 Thesis %A Fuchs, Christian %Y Seidel, Hans-Peter %A referee: Lensch, Hendrik P. A. %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Capturing and Reconstructing the Appearance of Complex 3D Scenes %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17C8-2 %F EDOC: 520445 %F OTHER: Local-ID: C125675300671F7B-85B64E48E51CA543C12576C700357A64-Fuchs2009:Thesis %I Universität des Saarlandes %C Saarbrücken %D 2009 %V phd %9 phd %X In this thesis, we present our research on new acquisition methods for reflectance properties of real-world objects. Specifically, we first show a method for acquiring spatially varying densities in volumes of translucent, gaseous material with just a single image. This makes the method applicable to constantly changing phenomena like smoke without the use of high-speed camera equipment. Furthermore, we investigated how two well-known techniques, synthetic aperture confocal imaging and algorithmic descattering, can be combined to help see through a translucent medium like fog or murky water. We show that the depth at which we can still see an object embedded in the scattering medium is increased. In a related publication, we show how polarization and descattering based on phase-shifting can be combined for efficient 3D scanning of translucent objects. Normally, subsurface scattering hinders the range estimation by offsetting the peak intensity beneath the surface away from the point of incidence. With our method, the subsurface scattering is reduced to a minimum and therefore reliable 3D scanning is made possible. Finally, we present a system which recovers surface geometry, reflectance properties of opaque objects, and prevailing lighting conditions at the time of image capture from just a small number of input photographs. While there exist previous approaches to recover reflectance properties, our system is the first to work on images taken under almost arbitrary, changing lighting conditions. This enables us to use images taken from a community photo collection website.
[150]
J. Gall, “Filtering and Optimization Strategies for Markerless Human Motion Capture with Skeleton-based Shape Models,” Universität des Saarlandes, Saarbrücken, 2009.
Export
BibTeX
@phdthesis{Gall2009:Thesis, TITLE = {Filtering and Optimization Strategies for Markerless Human Motion Capture with Skeleton-based Shape Models}, AUTHOR = {Gall, J{\"u}rgen}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-3CBDBCBF914EC213C12576C70033D9BD-Gall2009:Thesis}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, }
Endnote
%0 Thesis %A Gall, Jürgen %Y Seidel, Hans-Peter %A referee: Rosenhahn, Bodo %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Filtering and Optimization Strategies for Markerless Human Motion Capture with Skeleton-based Shape Models %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17BC-D %F EDOC: 520443 %F OTHER: Local-ID: C125675300671F7B-3CBDBCBF914EC213C12576C70033D9BD-Gall2009:Thesis %I Universität des Saarlandes %C Saarbrücken %D 2009 %V phd %9 phd
[151]
E. Happ, “Analyses of Evolutionary Algorithms,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
Evolutionary algorithms (EAs) are a highly successful tool commonly used in practice to solve algorithmic problems. This remarkable practical value, however, is not backed up by a deep theoretical understanding. Such an understanding would facilitate the application of EAs to further problems. Runtime analyses of EAs are one way to expand the theoretical knowledge in this field. This thesis presents runtime analyses for three prominent problems in combinatorial optimization. Additionally, it provides probability-theoretic tools that will simplify future runtime analyses of EAs. The first problem considered is the Single Source Shortest Path problem. The task is to find, in a weighted graph, shortest paths from a given source vertex to all other vertices. Developing a new analysis method, we can give tight bounds on the runtime of a previously designed and analyzed EA for this problem. The second problem is the All-Pairs Shortest Path problem. Given a weighted graph, one has to find a shortest path for every pair of vertices in the graph. For this problem we show that adding a crossover operator to a natural EA using only mutation provably decreases the runtime. This is the first time that the usefulness of a crossover operator has been shown for a combinatorial problem. The third problem considered is the Sorting problem. For this problem, we design a new representation based on trees. We show that the EA naturally arising from this representation has a better runtime than previously analyzed EAs.
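To make the object of study concrete, here is a minimal Python sketch of the classic (1+1) evolutionary algorithm on bit strings, the canonical algorithm in this style of runtime analysis; it is a generic illustration under our own naming, not code from the thesis.

import random

def one_plus_one_ea(n, fitness, steps=100_000):
    """Minimal (1+1) EA: standard bit mutation, elitist acceptance."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(steps):
        # Flip each bit independently with probability 1/n.
        y = [bit ^ (random.random() < 1.0 / n) for bit in x]
        fy = fitness(y)
        if fy >= fx:  # accept the offspring if it is at least as good
            x, fx = y, fy
    return x, fx

# Example: OneMax (number of ones); the expected optimization time of
# the (1+1) EA on OneMax is known to be Theta(n log n).
best, value = one_plus_one_ea(50, sum)
print(value)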
Export
BibTeX
@phdthesis{Happ2009, TITLE = {Analyses of Evolutionary Algorithms}, AUTHOR = {Happ, Edda}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Evolutionary algorithms (EAs) are a highly successful tool commonly used in practice to solve algorithmic problems. This remarkable practical value, however, is not backed up by a deep theoretical understanding. Such an understanding would facilitate the application of EAs to further problems. Runtime analyses of EAs are one way to expand the theoretical knowledge in this field. This thesis presents runtime analyses for three prominent problems in combinatorial optimization. Additionally, it provides probability-theoretic tools that will simplify future runtime analyses of EAs. The first problem considered is the Single Source Shortest Path problem. The task is to find, in a weighted graph, shortest paths from a given source vertex to all other vertices. Developing a new analysis method, we can give tight bounds on the runtime of a previously designed and analyzed EA for this problem. The second problem is the All-Pairs Shortest Path problem. Given a weighted graph, one has to find a shortest path for every pair of vertices in the graph. For this problem we show that adding a crossover operator to a natural EA using only mutation provably decreases the runtime. This is the first time that the usefulness of a crossover operator has been shown for a combinatorial problem. The third problem considered is the Sorting problem. For this problem, we design a new representation based on trees. We show that the EA naturally arising from this representation has a better runtime than previously analyzed EAs.}, }
Endnote
%0 Thesis %A Happ, Edda %Y Doerr, Benjamin %A referee: Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Analyses of Evolutionary Algorithms %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B4B2-2 %I Universität des Saarlandes %C Saarbrücken %D 2009 %V phd %9 phd %X Evolutionary algorithms (EAs) are a highly successful tool commonly used in practice to solve algorithmic problems. This remarkable practical value, however, is not backed up by a deep theoretical understanding. Such an understanding would facilitate the application of EAs to further problems. Runtime analyses of EAs are one way to expand the theoretical knowledge in this field. This thesis presents runtime analyses for three prominent problems in combinatorial optimization. Additionally, it provides probability-theoretic tools that will simplify future runtime analyses of EAs. The first problem considered is the Single Source Shortest Path problem. The task is to find, in a weighted graph, shortest paths from a given source vertex to all other vertices. Developing a new analysis method, we can give tight bounds on the runtime of a previously designed and analyzed EA for this problem. The second problem is the All-Pairs Shortest Path problem. Given a weighted graph, one has to find a shortest path for every pair of vertices in the graph. For this problem we show that adding a crossover operator to a natural EA using only mutation provably decreases the runtime. This is the first time that the usefulness of a crossover operator has been shown for a combinatorial problem. The third problem considered is the Sorting problem. For this problem, we design a new representation based on trees. We show that the EA naturally arising from this representation has a better runtime than previously analyzed EAs. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2427/
[152]
G. Ifrim, “Statistical Learning Techniques for Text Categorization with Sparse Labeled Data,” Universität des Saarlandes, Saarbrücken, 2009.
Export
BibTeX
@phdthesis{Ifrim-PhD-2009, TITLE = {Statistical Learning Techniques for Text Categorization with Sparse Labeled Data}, AUTHOR = {Ifrim, Georgiana}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-930B3ABFA8D01086C125756A0054EAEA-Ifrim-PhD-2009}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, }
Endnote
%0 Thesis %A Ifrim, Georgiana %Y Weikum, Gerhard %A referee: Hofmann, Thomas %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Statistical Learning Techniques for Text Categorization with Sparse Labeled Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17EB-3 %F EDOC: 520396 %F OTHER: Local-ID: C1256DBF005F876D-930B3ABFA8D01086C125756A0054EAEA-Ifrim-PhD-2009 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2236/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[153]
G. Kasneci, “Searching and Ranking in Entity-Relationship Graphs,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
The Web bears the potential to become the world's most comprehensive knowledge base. Organizing information from the Web into entity-relationship graph structures could be a first step towards unleashing this potential. In a second step, the inherent semantics of such structures would have to be exploited by expressive search techniques that go beyond today's keyword search paradigm. In this realm, as a first contribution of this thesis, we present NAGA (Not Another Google Answer), a new semantic search engine. NAGA provides an expressive, graph-based query language that enables queries with entities and relationships. The results are retrieved based on subgraph matching techniques and ranked by means of a statistical ranking model. As a second contribution, we present STAR (Steiner Tree Approximation in Relationship Graphs), an efficient technique for finding "close" relations (i.e., compact connections) between k (≥ 2) entities of interest in large entity-relationship graphs. Our third contribution is MING (Mining Informative Graphs). MING is an efficient method for retrieving "informative" subgraphs for k (≥ 2) entities of interest from an entity-relationship graph. Intuitively, these would be subgraphs that can explain the relations between the k entities of interest. The knowledge discovery tasks supported by MING have a stronger semantic flavor than the ones supported by STAR. STAR and MING are integrated into the query answering component of the NAGA engine. NAGA itself is a fully implemented prototype system and is part of the YAGO-NAGA project.
Export
BibTeX
@phdthesis{KasneciPhD2009, TITLE = {Searching and Ranking in Entity-Relationship Graphs}, AUTHOR = {Kasneci, Gjergji}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-2C25044AD8D088FDC125763B003D5810-KasneciPhD2009}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {The Web bears the potential to become the world's most comprehensive knowledge base. Organizing information from the Web into entity-relationship graph structures could be a first step towards unleashing this potential. In a second step, the inherent semantics of such structures would have to be exploited by expressive search techniques that go beyond today's keyword search paradigm. In this realm, as a first contribution of this thesis, we present NAGA (\textbf{N}ot \textbf{A}nother \textbf{G}oogle \textbf{A}nswer), a new semantic search engine. NAGA provides an expressive, graph-based query language that enables queries with entities and relationships. The results are retrieved based on subgraph matching techniques and ranked by means of a statistical ranking model. As a second contribution, we present STAR (\textbf{S}teiner \textbf{T}ree \textbf{A}pproximation in \textbf{R}elationship Graphs), an efficient technique for finding ``close'' relations (i.e., compact connections) between $k(\geq 2)$ entities of interest in large entity-relationship graphs. Our third contribution is MING (\textbf{M}ining \textbf{In}formative \textbf{G}raphs). MING is an efficient method for retrieving ``informative'' subgraphs for $k(\geq 2)$ entities of interest from an entity-relationship graph. Intuitively, these would be subgraphs that can explain the relations between the $k$ entities of interest. The knowledge discovery tasks supported by MING have a stronger semantic flavor than the ones supported by STAR. STAR and MING are integrated into the query answering component of the NAGA engine. NAGA itself is a fully implemented prototype system and is part of the YAGO-NAGA project.}, }
Endnote
%0 Thesis %A Kasneci, Gjergji %Y Weikum, Gerhard %A referee: Dittrich, Jens %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Searching and Ranking in Entity-Relationship Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17AE-D %F EDOC: 520410 %F OTHER: Local-ID: C1256DBF005F876D-2C25044AD8D088FDC125763B003D5810-KasneciPhD2009 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd %X The Web bears the potential to become the world's most comprehensive knowledge base. Organizing information from the Web into entity-relationship graph structures could be a first step towards unleashing this potential. In a second step, the inherent semantics of such structures would have to be exploited by expressive search techniques that go beyond today's keyword search paradigm. In this realm, as a first contribution of this thesis, we present NAGA (Not Another Google Answer), a new semantic search engine. NAGA provides an expressive, graph-based query language that enables queries with entities and relationships. The results are retrieved based on subgraph matching techniques and ranked by means of a statistical ranking model. As a second contribution, we present STAR (Steiner Tree Approximation in Relationship Graphs), an efficient technique for finding "close" relations (i.e., compact connections) between k (≥ 2) entities of interest in large entity-relationship graphs. Our third contribution is MING (Mining Informative Graphs). MING is an efficient method for retrieving "informative" subgraphs for k (≥ 2) entities of interest from an entity-relationship graph. Intuitively, these would be subgraphs that can explain the relations between the k entities of interest. The knowledge discovery tasks supported by MING have a stronger semantic flavor than the ones supported by STAR. STAR and MING are integrated into the query answering component of the NAGA engine. NAGA itself is a fully implemented prototype system and is part of the YAGO-NAGA project. %U http://scidok.sulb.uni-saarland.de/volltexte/2010/2964/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[154]
M. Kerber, “Geometric Algorithms for Algebraic Curves and Surfaces,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
This work presents novel geometric algorithms dealing with algebraic curves and surfaces of arbitrary degree. These algorithms are exact and complete: they return the mathematically true result for all input instances. Efficiency is achieved by cutting back expensive symbolic computation and favoring combinatorial and adaptive numerical methods instead, without spoiling exactness in the overall result. We present an algorithm for computing planar arrangements induced by real algebraic curves. We show its efficiency both in theory, by a complexity analysis, and in practice, by experimental comparison with related methods. For the latter, our solution has been implemented in the context of the Cgal library. The results show that it constitutes the best current exact implementation available for arrangements as well as for the related problem of computing the topology of one algebraic curve. The algorithm is also applied to related problems, such as arrangements of rotated curves, and arrangements embedded on a parameterized surface. In R^3, we propose a new method to compute an isotopic triangulation of an algebraic surface. This triangulation is based on a stratification of the surface, which reveals topological and geometric information. Our implementation is the first for this problem that makes consistent use of numerical methods, and still yields the exact topology of the surface. The thesis is written in English.
Export
BibTeX
@phdthesis{Kerber2009, TITLE = {Geometric Algorithms for Algebraic Curves and Surfaces}, AUTHOR = {Kerber, Michael}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {This work presents novel geometric algorithms dealing with algebraic curves and surfaces of arbitrary degree. These algorithms are exact and complete: they return the mathematically true result for all input instances. Efficiency is achieved by cutting back expensive symbolic computation and favoring combinatorial and adaptive numerical methods instead, without spoiling exactness in the overall result. We present an algorithm for computing planar arrangements induced by real algebraic curves. We show its efficiency both in theory, by a complexity analysis, and in practice, by experimental comparison with related methods. For the latter, our solution has been implemented in the context of the Cgal library. The results show that it constitutes the best current exact implementation available for arrangements as well as for the related problem of computing the topology of one algebraic curve. The algorithm is also applied to related problems, such as arrangements of rotated curves, and arrangements embedded on a parameterized surface. In $R^3$, we propose a new method to compute an isotopic triangulation of an algebraic surface. This triangulation is based on a stratification of the surface, which reveals topological and geometric information. Our implementation is the first for this problem that makes consistent use of numerical methods, and still yields the exact topology of the surface. The thesis is written in English.}, }
Endnote
%0 Thesis %A Kerber, Michael %Y Mehlhorn, Kurt %A referee: Yap, Chee-Keng %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Geometric Algorithms for Algebraic Curves and Surfaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B4BC-E %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd %X This work presents novel geometric algorithms dealing with algebraic curves and surfaces of arbitrary degree. These algorithms are exact and complete: they return the mathematically true result for all input instances. Efficiency is achieved by cutting back expensive symbolic computation and favoring combinatorial and adaptive numerical methods instead, without spoiling exactness in the overall result. We present an algorithm for computing planar arrangements induced by real algebraic curves. We show its efficiency both in theory, by a complexity analysis, and in practice, by experimental comparison with related methods. For the latter, our solution has been implemented in the context of the Cgal library. The results show that it constitutes the best current exact implementation available for arrangements as well as for the related problem of computing the topology of one algebraic curve. The algorithm is also applied to related problems, such as arrangements of rotated curves, and arrangements embedded on a parameterized surface. In R^3, we propose a new method to compute an isotopic triangulation of an algebraic surface. This triangulation is based on a stratification of the surface, which reveals topological and geometric information. Our implementation is the first for this problem that makes consistent use of numerical methods, and still yields the exact topology of the surface. The thesis is written in English. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2010/2949/
[155]
J. X. Parreira, “Decentralized Link Analysis in Peer-to-Peer Web Search Networks,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
Analyzing the authority or reputation of entities that are connected by a graph structure and ranking these entities is an important issue that arises in the Web, in Web 2.0 communities, and in other applications. The problem is typically addressed by computing the dominant eigenvector of a matrix that is suitably derived from the underlying graph, or by performing a full spectral decomposition of the matrix. Although such analyses could be performed by a centralized server, there are good reasons for running these computations in a decentralized manner across many peers, such as scalability, privacy, and censorship. There exist a number of approaches for speeding up the analysis by partitioning the graph into disjoint fragments. However, such methods are not suitable for a peer-to-peer network, where overlap among the fragments might occur. In addition, peer-to-peer approaches need to consider network characteristics, such as peers being unaware of other peers' contents, susceptibility to malicious attacks, and network dynamics (so-called churn). In this thesis we make the following major contributions. We present JXP, a decentralized algorithm for computing authority scores of entities distributed in a peer-to-peer (P2P) network that allows peers to have overlapping content and requires no a priori knowledge of other peers' content. We also show the benefits of JXP in the Minerva distributed Web search engine. We present an extension of JXP, coined TrustJXP, that contains a reputation model in order to deal with misbehaving peers. We present another extension of JXP that handles dynamics on peer-to-peer networks, as well as an algorithm for estimating the current number of entities in the network. This thesis also presents novel methods for embedding JXP in peer-to-peer networks and applications. We present an approach for creating links among peers, forming semantic overlay networks, where peers are free to decide which connections they create and which they want to avoid, based on various usefulness estimators. We show how peer-to-peer applications, like the JXP algorithm, can greatly benefit from these additional semantic relations.
Export
BibTeX
@phdthesis{XavierParreiraPhD2009, TITLE = {Decentralized Link Analysis in Peer-to-Peer Web Search Networks}, AUTHOR = {Parreira, Josiane Xavier}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-25626}, LOCALID = {Local-ID: C1256DBF005F876D-DE05276B6A7BAD09C125763F002C221C-XavierParreiraPhD2009}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Analyzing the authority or reputation of entities that are connected by a graph structure and ranking these entities is an important issue that arises in the Web, in Web 2.0 communities, and in other applications. The problem is typically addressed by computing the dominant eigenvector of a matrix that is suitably derived from the underlying graph, or by performing a full spectral decomposition of the matrix. Although such analyses could be performed by a centralized server, there are good reasons for running these computations in a decentralized manner across many peers, such as scalability, privacy, and censorship. There exist a number of approaches for speeding up the analysis by partitioning the graph into disjoint fragments. However, such methods are not suitable for a peer-to-peer network, where overlap among the fragments might occur. In addition, peer-to-peer approaches need to consider network characteristics, such as peers being unaware of other peers' contents, susceptibility to malicious attacks, and network dynamics (so-called churn). In this thesis we make the following major contributions. We present JXP, a decentralized algorithm for computing authority scores of entities distributed in a peer-to-peer (P2P) network that allows peers to have overlapping content and requires no a priori knowledge of other peers' content. We also show the benefits of JXP in the Minerva distributed Web search engine. We present an extension of JXP, coined \emph{TrustJXP}, that contains a reputation model in order to deal with misbehaving peers. We present another extension of JXP that handles dynamics on peer-to-peer networks, as well as an algorithm for estimating the current number of entities in the network. This thesis also presents novel methods for embedding JXP in peer-to-peer networks and applications. We present an approach for creating links among peers, forming \emph{semantic overlay networks}, where peers are free to decide which connections they create and which they want to avoid, based on various usefulness estimators. We show how peer-to-peer applications, like the JXP algorithm, can greatly benefit from these additional semantic relations.}, }
Endnote
%0 Thesis %A Parreira, Josiane Xavier %Y Weikum, Gerhard %A referee: Bast, Holger %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Decentralized Link Analysis in Peer-to-Peer Web Search Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17B9-4 %F EDOC: 520411 %F OTHER: Local-ID: C1256DBF005F876D-DE05276B6A7BAD09C125763F002C221C-XavierParreiraPhD2009 %U urn:nbn:de:bsz:291-scidok-25626 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd %X Analyzing the authority or reputation of entities that are connected by a graph structure and ranking these entities is an important issue that arises in the Web, in Web 2.0 communities, and in other applications. The problem is typically addressed by computing the dominant eigenvector of a matrix that is suitably derived from the underlying graph, or by performing a full spectral decomposition of the matrix. Although such analyses could be performed by a centralized server, there are good reasons for running these computations in a decentralized manner across many peers, such as scalability, privacy, and censorship. There exist a number of approaches for speeding up the analysis by partitioning the graph into disjoint fragments. However, such methods are not suitable for a peer-to-peer network, where overlap among the fragments might occur. In addition, peer-to-peer approaches need to consider network characteristics, such as peers being unaware of other peers' contents, susceptibility to malicious attacks, and network dynamics (so-called churn). In this thesis we make the following major contributions. We present JXP, a decentralized algorithm for computing authority scores of entities distributed in a peer-to-peer (P2P) network that allows peers to have overlapping content and requires no a priori knowledge of other peers' content. We also show the benefits of JXP in the Minerva distributed Web search engine. We present an extension of JXP, coined TrustJXP, that contains a reputation model in order to deal with misbehaving peers. We present another extension of JXP that handles dynamics on peer-to-peer networks, as well as an algorithm for estimating the current number of entities in the network. This thesis also presents novel methods for embedding JXP in peer-to-peer networks and applications. We present an approach for creating links among peers, forming semantic overlay networks, where peers are free to decide which connections they create and which they want to avoid, based on various usefulness estimators. We show how peer-to-peer applications, like the JXP algorithm, can greatly benefit from these additional semantic relations. %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2562/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[156]
S. Ray, “Weak and Strong ε-Nets for Geometric Range Spaces,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
This thesis deals with strong and weak ε-nets in geometry and related problems. In the first half of the thesis we look at strong ε-nets and the closely related problem of finding minimum hitting sets. We give a new technique for proving the existence of small ε-nets for several geometric range spaces. Our technique also gives efficient algorithms to compute small ε-nets. By a well known reduction due to Brönnimann and Goodrich [10], our results imply constant factor approximation algorithms for the corresponding minimum hitting set problems. We show how the approximation factor given by this standard technique can be improved by giving the first polynomial time approximation scheme for some of the minimum hitting set problems. The algorithm is very simple and is based on local search. In the second half of the thesis, we turn to weak ε-nets, a very important generalization of the idea of strong ε-nets for convex ranges. We first consider the simplest example of a weak ε-net, namely the centerpoint. We give a new and arguably simpler proof of the well known centerpoint theorem (and also Helly's theorem) in any dimension and use the same idea to prove an optimal generalization of the centerpoint to two points in the plane. Our technique also gives several improved results for small weak ε-nets in the plane. We finally look at the general weak ε-net problem in d dimensions. A long standing conjecture states that weak ε-nets of size O((1/ε) polylog(1/ε)) exist for convex sets in any dimension. It turns out that if the conjecture is true then it should be possible to construct a weak ε-net from a small number of input points. We show that this is indeed true and that it is possible to construct a weak ε-net from O((1/ε) polylog(1/ε)) input points. We also show an interesting connection between weak and strong ε-nets which shows how random sampling can be used to construct weak ε-nets.
Export
BibTeX
@phdthesis{Ray2009, TITLE = {Weak and Strong $\varepsilon$-Nets for Geometric Range Spaces}, AUTHOR = {Ray, Saurabh}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {This thesis deals with strong and weak $\varepsilon$-nets in geometry and related problems. In the first half of the thesis we look at strong $\varepsilon$-nets and the closely related problem of finding minimum hitting sets. We give a new technique for proving the existence of small $\varepsilon$-nets for several geometric range spaces. Our technique also gives efficient algorithms to compute small $\varepsilon$-nets. By a well known reduction due to Br{\"o}nnimann and Goodrich [10], our results imply constant factor approximation algorithms for the corresponding minimum hitting set problems. We show how the approximation factor given by this standard technique can be improved by giving the first polynomial time approximation scheme for some of the minimum hitting set problems. The algorithm is very simple and is based on local search. In the second half of the thesis, we turn to weak $\varepsilon$-nets, a very important generalization of the idea of strong $\varepsilon$-nets for convex ranges. We first consider the simplest example of a weak $\varepsilon$-net, namely the centerpoint. We give a new and arguably simpler proof of the well known centerpoint theorem (and also Helly's theorem) in any dimension and use the same idea to prove an optimal generalization of the centerpoint to two points in the plane. Our technique also gives several improved results for small weak $\varepsilon$-nets in the plane. We finally look at the general weak $\varepsilon$-net problem in $d$ dimensions. A long standing conjecture states that weak $\varepsilon$-nets of size $O((1/\varepsilon)\,\mathrm{polylog}(1/\varepsilon))$ exist for convex sets in any dimension. It turns out that if the conjecture is true then it should be possible to construct a weak $\varepsilon$-net from a small number of input points. We show that this is indeed true and that it is possible to construct a weak $\varepsilon$-net from $O((1/\varepsilon)\,\mathrm{polylog}(1/\varepsilon))$ input points. We also show an interesting connection between weak and strong $\varepsilon$-nets which shows how random sampling can be used to construct weak $\varepsilon$-nets.}, }
Endnote
%0 Thesis %A Ray, Saurabh %Y Seidel, Raimund %A referee: Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Weak and Strong &#949;-Nets for Geometric Range Spaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B4CA-E %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd %X This thesis deals with strong and weak &#949;-nets in geometry and related problems. In the first half of the thesis we look at strong &#949;-nets and the closely related problem of finding minimum hitting sets. We give a new technique for proving the existence of small &#949;-nets for several geometric range spaces. Our technique also gives efficient algorithms to compute small &#949;-nets. By a well known reduction due to Br&#246;nnimann and Goodrich [10], our results imply constant factor approximation algorithms for the corresponding minimum hitting set problems. We show how the approximation factor given by this standard technique can be improved by giving the first polynomial time approximation scheme for some of the minimum hitting set problems. The algorithm is very simple and is based on local search. In the second half of the thesis, we turn to weak &#949;-nets, a very important generalization of the idea of strong &#949;-nets for convex ranges. We first consider the simplest example of a weak &#949;-net, namely the centerpoint. We give a new and arguably simpler proof of the well known centerpoint theorem (and also Helly's theorem) in any dimension and use the same idea to prove an optimal generalization of the centerpoint to two points in the plane. Our technique also gives several improved results for small weak &#949;-nets in the plane. We finally look at the general weak &#949;-net problem in d dimensions. A long standing conjecture states that weak &#949;-nets of size O((1/&#949;) polylog(1/&#949;)) exist for convex sets in any dimension. It turns out that if the conjecture is true then it should be possible to construct a weak &#949;-net from a small number of input points. We show that this is indeed true and that it is possible to construct a weak &#949;-net from O((1/&#949;) polylog(1/&#949;)) input points. We also show an interesting connection between weak and strong &#949;-nets which shows how random sampling can be used to construct weak &#949;-nets. %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2567/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[157]
T. Ritschel, “Perceptually-motivated, Interactive Rendering and Editing of Global Illumination,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
This thesis proposes several new perceptually-motivated techniques to synthesize, edit and enhance the depiction of three-dimensional virtual scenes. Finding algorithms that fit the perceptually economic middle ground between artistic depiction and full physical simulation is the challenge taken on in this work. First, we present three interactive global illumination rendering approaches that are inspired by perception to efficiently depict important light transport. These methods have in common that they compute global illumination in large and fully dynamic scenes, allowing for light, geometry, and material changes at interactive or real-time rates. Further, this thesis proposes a tool for editing reflections that exploits perception to bend physical laws to match artistic goals. Finally, this work contributes a post-processing operator that depicts high contrast scenes in the same way as artists do, by simulating the scene as "seen" through a dynamic virtual human eye in real time.
Export
BibTeX
@phdthesis{Ritschel2009Thesis, TITLE = {Perceptually-motivated, Interactive Rendering and Editing of Global Illumination}, AUTHOR = {Ritschel, Tobias}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-EE93446D691DFB45C12576C5003C9355-Ritschel2009Thesis}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {This thesis proposes several new perceptually-motivated techniques to synthesize, edit and enhance the depiction of three-dimensional virtual scenes. Finding algorithms that fit the perceptually economic middle ground between artistic depiction and full physical simulation is the challenge taken on in this work. First, we present three interactive global illumination rendering approaches that are inspired by perception to efficiently depict important light transport. These methods have in common that they compute global illumination in large and fully dynamic scenes, allowing for light, geometry, and material changes at interactive or real-time rates. Further, this thesis proposes a tool for editing reflections that exploits perception to bend physical laws to match artistic goals. Finally, this work contributes a post-processing operator that depicts high contrast scenes in the same way as artists do, by simulating the scene as ``seen'' through a dynamic virtual human eye in real time.}, }
Endnote
%0 Thesis %A Ritschel, Tobias %Y Seidel, Hans-Peter %A referee: Kautz, Jan %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Perceptually-motivated, Interactive Rendering and Editing of Global Illumination : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17AB-4 %F EDOC: 520440 %F OTHER: Local-ID: C125675300671F7B-EE93446D691DFB45C12576C5003C9355-Ritschel2009Thesis %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd %X This thesis proposes several new perceptually-motivated techniques to synthesize, edit and enhance the depiction of three-dimensional virtual scenes. Finding algorithms that fit the perceptually economic middle ground between artistic depiction and full physical simulation is the challenge taken on in this work. First, we present three interactive global illumination rendering approaches that are inspired by perception to efficiently depict important light transport. These methods have in common that they compute global illumination in large and fully dynamic scenes, allowing for light, geometry, and material changes at interactive or real-time rates. Further, this thesis proposes a tool for editing reflections that exploits perception to bend physical laws to match artistic goals. Finally, this work contributes a post-processing operator that depicts high contrast scenes in the same way as artists do, by simulating the scene as "seen" through a dynamic virtual human eye in real time. %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3153/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[158]
O. Schall, “Robust and Efficient Processing Techniques for Static and Dynamic Geometric Data,” Universität des Saarlandes, Saarbrücken, 2009.
Export
BibTeX
@phdthesis{Schall2009:Thesis, TITLE = {Robust and Efficient Processing Techniques for Static and Dynamic Geometric Data}, AUTHOR = {Schall, Oliver}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-8F850A2B04493834C12576C70034ABCA-Schall2009:Thesis}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, }
Endnote
%0 Thesis %A Schall, Oliver %Y Seidel, Hans-Peter %A referee: Belyaev, Alexander %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Robust and Efficient Processing Techniques for Static and Dynamic Geometric Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17B6-A %F EDOC: 520444 %F OTHER: Local-ID: C125675300671F7B-8F850A2B04493834C12576C70034ABCA-Schall2009:Thesis %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd
[159]
T. Schultz, “Feature Extraction for Visual Analysis of DW-MRI Data,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) is a recent modality to investigate the major neuronal pathways of the human brain. However, the rich DW-MRI datasets cannot be interpreted without proper preprocessing. In order to achieve understandable visualizations, this dissertation reduces the complex data to relevant features. The first part is inspired by topological features in flow data. Novel features reconstruct fuzzy fiber bundle geometry from probabilistic tractography results. The topological properties of existing features that extract the skeleton of white matter tracts are clarified, and the core of regions with planar diffusion is visualized. The second part builds on methods from computer vision. Relevant boundaries in the data are identified via regularized eigenvalue derivatives, and boundary information is used to segment anisotropy isosurfaces into meaningful regions. A higher-order structure tensor is shown to be an accurate descriptor of local structure in diffusion data. The third part is concerned with fiber tracking. Streamline visualizations are improved by adding features from structural MRI in a way that emphasizes the relation between the two types of data, and the accuracy of streamlines in high angular resolution data is increased by modeling the estimation of crossing fiber bundles as a low-rank tensor approximation problem.
Export
BibTeX
@phdthesis{Schultz:PhD09, TITLE = {Feature Extraction for Visual Analysis of {DW-MRI} Data}, AUTHOR = {Schultz, Thomas}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-75D5B043A5F12F5CC12576A500567C6B-Schultz:PhD09}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) is a recent modality to investigate the major neuronal pathways of the human brain. However, the rich DW-MRI datasets cannot be interpreted without proper preprocessing. In order to achieve understandable visualizations, this dissertation reduces the complex data to relevant features. The first part is inspired by topological features in flow data. Novel features reconstruct fuzzy fiber bundle geometry from probabilistic tractography results. The topological properties of existing features that extract the skeleton of white matter tracts are clarified, and the core of regions with planar diffusion is visualized. The second part builds on methods from computer vision. Relevant boundaries in the data are identified via regularized eigenvalue derivatives, and boundary information is used to segment anisotropy isosurfaces into meaningful regions. A higher-order structure tensor is shown to be an accurate descriptor of local structure in diffusion data. The third part is concerned with fiber tracking. Streamline visualizations are improved by adding features from structural MRI in a way that emphasizes the relation between the two types of data, and the accuracy of streamlines in high angular resolution data is increased by modeling the estimation of crossing fiber bundles as a low-rank tensor approximation problem.}, }
Endnote
%0 Thesis %A Schultz, Thomas %Y Seidel, Hans-Peter %A referee: Theisel, Holger %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Feature Extraction for Visual Analysis of DW-MRI Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17C1-0 %F EDOC: 520462 %F OTHER: Local-ID: C125675300671F7B-75D5B043A5F12F5CC12576A500567C6B-Schultz:PhD09 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd %X Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) is a recent modality to investigate the major neuronal pathways of the human brain. However, the rich DW-MRI datasets cannot be interpreted without proper preprocessing. In order to achieve understandable visualizations, this dissertation reduces the complex data to relevant features. The first part is inspired by topological features in flow data. Novel features reconstruct fuzzy fiber bundle geometry from probabilistic tractography results. The topological properties of existing features that extract the skeleton of white matter tracts are clarified, and the core of regions with planar diffusion is visualized. The second part builds on methods from computer vision. Relevant boundaries in the data are identified via regularized eigenvalue derivatives, and boundary information is used to segment anisotropy isosurfaces into meaningful regions. A higher-order structure tensor is shown to be an accurate descriptor of local structure in diffusion data. The third part is concerned with fiber tracking. Streamline visualizations are improved by adding features from structural MRI in a way that emphasizes the relation between the two types of data, and the accuracy of streamlines in high angular resolution data is increased by modeling the estimation of crossing fiber bundles as a low-rank tensor approximation problem.
[160]
P. Schweitzer, “Problems of Unknown Complexity: Graph isomorphism and Ramsey theoretic numbers,” Universität des Saarlandes, Saarbrücken, 2009.
Export
BibTeX
@phdthesis{SchweitzerPhD2009, TITLE = {Problems of Unknown Complexity: Graph isomorphism and Ramsey theoretic numbers}, AUTHOR = {Schweitzer, Pascal}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-24256}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, }
Endnote
%0 Thesis %A Schweitzer, Pascal %Y Mehlhorn, Kurt %A referee: Bl&#228;ser, Markus %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Problems of Unknown Complexity: Graph isomorphism and Ramsey theoretic numbers : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-DCF8-6 %U urn:nbn:de:bsz:291-scidok-24256 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2425/
[161]
C. Stoll, “Template Based Shape Processing,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
As computers can only represent and process discrete data, information gathered from the real world always has to be sampled. While it is nowadays possible to sample many signals accurately and thus generate high-quality reconstructions (for example of images and audio data), accurately and densely sampling 3D geometry is still a challenge. The signal samples may be corrupted by noise and outliers, and contain large holes due to occlusions. These issues become even more pronounced when also considering the temporal domain. Because of this, developing methods for accurate reconstruction of shapes from a sparse set of discrete data is an important aspect of the computer graphics processing pipeline. In this thesis we propose novel approaches to including semantic knowledge into reconstruction processes using template based shape processing. We formulate shape reconstruction as a deformable template fitting process, where we try to fit a given template model to the sampled data. This approach allows us to present novel solutions to several fundamental problems in the area of shape reconstruction. We address static problems like constrained texture mapping and semantically meaningful hole-filling in surface reconstruction from 3D scans, temporal problems such as mesh based performance capture, and finally dynamic problems like the estimation of physically based material parameters of animated templates.
Export
BibTeX
@phdthesis{Stoll2009, TITLE = {Template Based Shape Processing}, AUTHOR = {Stoll, Carsten}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-2BE830AF307A8850C12576C5004A78A0-Stoll2009}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {As computers can only represent and process discrete data, information gathered from the real world always has to be sampled. While it is nowadays possible to sample many signals accurately and thus generate high-quality reconstructions (for example of images and audio data), accurately and densely sampling 3D geometry is still a challenge. The signal samples may be corrupted by noise and outliers, and contain large holes due to occlusions. These issues become even more pronounced when also considering the temporal domain. Because of this, developing methods for accurate reconstruction of shapes from a sparse set of discrete data is an important aspect of the computer graphics processing pipeline. In this thesis we propose novel approaches to including semantic knowledge into reconstruction processes using template based shape processing. We formulate shape reconstruction as a deformable template fitting process, where we try to fit a given template model to the sampled data. This approach allows us to present novel solutions to several fundamental problems in the area of shape reconstruction. We address static problems like constrained texture mapping and semantically meaningful hole-filling in surface reconstruction from 3D scans, temporal problems such as mesh based performance capture, and finally dynamic problems like the estimation of physically based material parameters of animated templates.}, }
Endnote
%0 Thesis %A Stoll, Carsten %Y Seidel, Hans-Peter %A referee: Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Template Based Shape Processing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17B4-E %F EDOC: 520449 %F OTHER: Local-ID: C125675300671F7B-2BE830AF307A8850C12576C5004A78A0-Stoll2009 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd %X As computers can only represent and process discrete data, information gathered from the real world always has to be sampled. While it is nowadays possible to sample many signals accurately and thus generate high-quality reconstructions (for example of images and audio data), accurately and densely sampling 3D geometry is still a challenge. The signal samples may be corrupted by noise and outliers, and contain large holes due to occlusions. These issues become even more pronounced when also considering the temporal domain. Because of this, developing methods for accurate reconstruction of shapes from a sparse set of discrete data is an important aspect of the computer graphics processing pipeline. In this thesis we propose novel approaches to including semantic knowledge into reconstruction processes using template based shape processing. We formulate shape reconstruction as a deformable template fitting process, where we try to fit a given template model to the sampled data. This approach allows us to present novel solutions to several fundamental problems in the area of shape reconstruction. We address static problems like constrained texture mapping and semantically meaningful hole-filling in surface reconstruction from 3D scans, temporal problems such as mesh based performance capture, and finally dynamic problems like the estimation of physically based material parameters of animated templates. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2010/2965/
[162]
F. Suchanek, “Automated Construction and Growth of a Large Ontology,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
An ontology is a computer-processable collection of knowledge about the world. This thesis explains how an ontology can be constructed and expanded automatically. The proposed approach consists of three contributions: (1) A core ontology, YAGO: YAGO is an ontology that has been constructed automatically. It combines high accuracy with large coverage and serves as a core that can be expanded. (2) A tool for information extraction, LEILA: LEILA is a system that can extract knowledge from natural language texts. LEILA will be used to find new facts for YAGO. (3) An integration mechanism, SOFIE: SOFIE is a system that can reason on the plausibility of new knowledge. SOFIE will assess the facts found by LEILA and integrate them into YAGO. Each of these components comes with a fully implemented system. Together, they form an integrative architecture, which not only gathers new facts, but also reconciles them with the existing facts. The result is an ever-growing, yet highly accurate ontological knowledge base. A survey of applications of the ontology completes the thesis.
Export
BibTeX
@phdthesis{Suchanek2008, TITLE = {Automated Construction and Growth of a Large Ontology}, AUTHOR = {Suchanek, Fabian}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-88F087968BD86FD3C125755C0057599D-Suchanek2008}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {An ontology is a computer-processable collection of knowledge about the world. This thesis explains how an ontology can be constructed and expanded automatically. The proposed approach consists of three contributions: \begin{enumerate} \item A core ontology, YAGO.\\ YAGO is an ontology that has been constructed automatically. It combines high accuracy with large coverage and serves as a core that can be expanded. \item A tool for information extraction, LEILA.\\ LEILA is a system that can extract knowledge from natural language texts. LEILA will be used to find new facts for YAGO. \item An integration mechanism, SOFIE.\\ SOFIE is a system that can reason on the plausibility of new knowledge. SOFIE will assess the facts found by LEILA and integrate them into YAGO. \end{enumerate} Each of these components comes with a fully implemented system. Together, they form an integrative architecture, which not only gathers new facts, but also reconciles them with the existing facts. The result is an ever-growing, yet highly accurate ontological knowledge base. A survey of applications of the ontology completes the thesis.}, }
Endnote
%0 Thesis %A Suchanek, Fabian %Y Weikum, Gerhard %A referee: Studer, Rudi %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Automated Construction and Growth of a Large Ontology : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A8C-1 %F EDOC: 428323 %F OTHER: Local-ID: C125756E0038A185-88F087968BD86FD3C125755C0057599D-Suchanek2008 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V phd %9 phd %X An ontology is a computer-processable collection of knowledge about the world. This thesis explains how an ontology can be constructed and expanded automatically. The proposed approach consists of three contributions: (1) A core ontology, YAGO: YAGO is an ontology that has been constructed automatically. It combines high accuracy with large coverage and serves as a core that can be expanded. (2) A tool for information extraction, LEILA: LEILA is a system that can extract knowledge from natural language texts. LEILA will be used to find new facts for YAGO. (3) An integration mechanism, SOFIE: SOFIE is a system that can reason on the plausibility of new knowledge. SOFIE will assess the facts found by LEILA and integrate them into YAGO. Each of these components comes with a fully implemented system. Together, they form an integrative architecture, which not only gathers new facts, but also reconciles them with the existing facts. The result is an ever-growing, yet highly accurate ontological knowledge base. A survey of applications of the ontology completes the thesis.
2008
[163]
D. Ajwani, “Traversing Large Graphs in Realistic Settings,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@phdthesis{AjwaniPhD08, TITLE = {Traversing Large Graphs in Realistic Settings}, AUTHOR = {Ajwani, Deepak}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-22357}, LOCALID = {Local-ID: C125756E0038A185-C5763432F20D9486C125759000260238-AjwaniPhD08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Ajwani, Deepak %A referee: Meyer, Ulrich %Y Mehlhorn, Kurt %A referee: Brodal, Gerth %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Traversing Large Graphs in Realistic Settings : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A8A-5 %F EDOC: 428311 %F OTHER: Local-ID: C125756E0038A185-C5763432F20D9486C125759000260238-AjwaniPhD08 %U urn:nbn:de:bsz:291-scidok-22357 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2235/
[164]
T. Annen, “Efficient shadow map filtering,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@phdthesis{AnnenPhD08, TITLE = {Efficient shadow map filtering}, AUTHOR = {Annen, Thomas}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-AD3414388D765A23C125759000239E32-AnnenPhD08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Annen, Thomas %Y Seidel, Hans-Peter %A referee: Kautz, Jan %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Efficient shadow map filtering : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A9C-E %F EDOC: 428312 %F OTHER: Local-ID: C125756E0038A185-AD3414388D765A23C125759000239E32-AnnenPhD08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2121/
[165]
E. Berberich, “Robust and Efficient Software for Problems in 2.5-Dimensional Non-Linear Geometry - Algorithms and Implementations,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@phdthesis{BerberichPhD08, TITLE = {Robust and Efficient Software for Problems in 2.5-Dimensional Non-Linear Geometry -- Algorithms and Implementations}, AUTHOR = {Berberich, Eric}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-28AD87CE0CD28FBDC125759000278B55-BerberichPhD08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Berberich, Eric %Y Mehlhorn, Kurt %A referee: Schirra, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Robust and Efficient Software for Problems in 2.5-Dimensional Non-Linear Geometry - Algorithms and Implementations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A86-D %F EDOC: 428313 %F OTHER: Local-ID: C125756E0038A185-28AD87CE0CD28FBDC125759000278B55-BerberichPhD08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2230/
[166]
C. Bock, “Computational Epigenetics - Bioinformatic methods for epigenome prediction, DNA methylation mapping and cancer epigenetics,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@phdthesis{Bock2008b, TITLE = {Computational Epigenetics -- Bioinformatic methods for epigenome prediction, {DNA} methylation mapping and cancer epigenetics}, AUTHOR = {Bock, Christoph}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-49DE16446B8011EBC12575220079E05D-Bock2008b}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Bock, Christoph %Y Lengauer, Thomas %A referee: Walter, J&#246;rn %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Computational Epigenetics - Bioinformatic methods for epigenome prediction, DNA methylation mapping and cancer epigenetics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A92-2 %F EDOC: 428314 %F OTHER: Local-ID: C125756E0038A185-49DE16446B8011EBC12575220079E05D-Bock2008b %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2049/
[167]
T. Chen, “New 3D Scanning Techniques for Complex Scenes,” Universität des Saarlandes, Saarbrücken, 2008.
Abstract
This thesis presents new 3D scanning methods for complex scenes, such as surfaces with fine-scale geometric details, translucent objects, low-albedo objects, glossy objects, scenes with interreflection, and discontinuous scenes. Starting from the observation that specular reflection is a reliable visual cue for surface mesostructure perception, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects. Translucent objects pose a difficult problem for traditional optical-based 3D scanning techniques. We analyze and compare two descattering methods, phase-shifting and polarization, and further present several phase-shifting and polarization based methods for high quality 3D scanning of translucent objects. We introduce the concept of modulation based separation, where a high frequency signal is multiplied on top of another signal. The modulated signal inherits the separation properties of the high frequency signal and allows us to remove artifacts due to global illumination. This method can be used for efficient 3D scanning of scenes with significant subsurface scattering and interreflections.
Export
BibTeX
@phdthesis{Chen2008, TITLE = {New {3D} Scanning Techniques for Complex Scenes}, AUTHOR = {Chen, Tongbo}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {This thesis presents new 3D scanning methods for complex scenes, such as surfaces with fine-scale geometric details, translucent objects, low-albedo objects, glossy objects, scenes with interreflection, and discontinuous scenes. Starting from the observation that specular reflection is a reliable visual cue for surface mesostructure perception, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects. Translucent objects pose a difficult problem for traditional optical-based 3D scanning techniques. We analyze and compare two descattering methods, phase-shifting and polarization, and further present several phase-shifting and polarization based methods for high quality 3D scanning of translucent objects. We introduce the concept of modulation based separation, where a high frequency signal is multiplied on top of another signal. The modulated signal inherits the separation properties of the high frequency signal and allows us to remove artifacts due to global illumination. This method can be used for efficient 3D scanning of scenes with significant subsurface scattering and interreflections.}, }
Endnote
%0 Thesis %A Chen, Tongbo %Y Seidel, Hans-Peter %A referee: Lensch, Hendrik P. A. %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T New 3D Scanning Techniques for Complex Scenes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B549-8 %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd %X This thesis presents new 3D scanning methods for complex scenes, such as surfaces with fine-scale geometric details, translucent objects, low-albedo objects, glossy objects, scenes with interreflection, and discontinuous scenes. Starting from the observation that specular reflection is a reliable visual cue for surface mesostructure perception, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects. Translucent objects pose a difficult problem for traditional optical 3D scanning techniques. We analyze and compare two descattering methods, phase-shifting and polarization, and further present several phase-shifting and polarization-based methods for high-quality 3D scanning of translucent objects. We introduce the concept of modulation-based separation, where a high-frequency signal is multiplied on top of another signal. The modulated signal inherits the separation properties of the high-frequency signal and allows us to remove artifacts due to global illumination. This method can be used for efficient 3D scanning of scenes with significant subsurface scattering and interreflections.
[168]
E. de Aguiar, “Animation and Performance Capture Using Digitized Models,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@phdthesis{deAguiarPhD08, TITLE = {Animation and Performance Capture Using Digitized Models}, AUTHOR = {de Aguiar, Edilson}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-DB8F8643A5A41851C12575900027FD81-deAguiarPhD08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A de Aguiar, Edilson %Y Seidel, Hans-Peter %A referee: Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Animation and Performance Capture Using Digitized Models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A80-A %F EDOC: 428315 %F OTHER: Local-ID: C125756E0038A185-DB8F8643A5A41851C12575900027FD81-deAguiarPhD08 %I Universität des Saarlandes %C Saarbrücken %D 2008 %P XIV, 184 p. %V phd %9 phd
[169]
A. Eigenwillig, “Real root isolation for exact and approximate polynomials using Descartes’ rule of signs,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@phdthesis{EigenwilligPhD, TITLE = {Real root isolation for exact and approximate polynomials using Descartes' rule of signs}, AUTHOR = {Eigenwillig, Arno}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-BBF1AE2CBF2720CDC125758A003F2CE9-EigenwilligPhD}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Eigenwillig, Arno %Y Mehlhorn, Kurt %A referee: Seidel, Raimund %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Real root isolation for exact and approximate polynomials using Descartes' rule of signs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1AB0-F %F EDOC: 428316 %F OTHER: Local-ID: C125756E0038A185-BBF1AE2CBF2720CDC125758A003F2CE9-EigenwilligPhD %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3244/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
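As background to the entry above: Descartes' rule of signs states that the number of positive real roots of a real polynomial equals the number of sign changes in its coefficient sequence, or is smaller than it by an even number. A minimal sketch of the sign-change count (the function name and example are illustrative, not taken from the thesis):

def sign_changes(coeffs):
    # coeffs: coefficients from highest to lowest degree,
    # e.g. [1, -3, 2] encodes x^2 - 3x + 2
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)

# x^2 - 3x + 2 = (x - 1)(x - 2) has two positive roots and two sign changes
assert sign_changes([1, -3, 2]) == 2

Descartes-based isolation methods apply this count to polynomials transformed so that a subinterval maps onto (0, infinity), bisecting until every interval reports zero or one sign change.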
[170]
M. Fuchs, “Advanced Methods for Relightable Scene Representations in Image Space,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@phdthesis{FuchsPhD08, TITLE = {Advanced Methods for Relightable Scene Representations in Image Space}, AUTHOR = {Fuchs, Martin}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-DB0EF714E47AA5F1C125759000291083-FuchsPhD08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Fuchs, Martin %Y Seidel, Hans-Peter %A referee: Lensch, Hendrik P. A. %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Advanced Methods for Relightable Scene Representations in Image Space : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A9A-1 %F EDOC: 428317 %F OTHER: Local-ID: C125756E0038A185-DB0EF714E47AA5F1C125759000291083-FuchsPhD08 %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2119/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[171]
C. Hartmann, “Modeling of Flexible Side Chains for Protein-Ligand Docking,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@phdthesis{HartmannPhD08, TITLE = {Modeling of Flexible Side Chains for Protein-Ligand Docking}, AUTHOR = {Hartmann, Christoph}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-0030EB3C6BA8ADC1C125759000246714-HartmannPhD08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Hartmann, Christoph %Y Lengauer, Thomas %A referee: Lenhof, Hans-Peter %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Modeling of Flexible Side Chains for Protein-Ligand Docking : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1AAA-E %F EDOC: 428318 %F OTHER: Local-ID: C125756E0038A185-0030EB3C6BA8ADC1C125759000246714-HartmannPhD08 %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3214/
[172]
T. Hillenbrand, “Superposition and Decision Procedures - Back and Forth,” Universität des Saarlandes, Saarbrücken, 2008.
Abstract
Two apparently different approaches to automating deduction are mentioned in the title; they are the subject of a debate on "big engines vs. little engines of proof". The contributions in this thesis advocate that these two strands of research can interplay in subtle and sometimes unexpected ways, such that mutual pervasion can lead to intriguing results: Firstly, superposition can be run on top of decision procedures. This we demonstrate for the class of Shostak theories, incorporating a little engine into a big one. As another instance of decision procedures within superposition, we show that ground confluent rewrite systems, which decide entailment problems in equational logic, can be harnessed for detecting redundancies in superposition derivations. Secondly, superposition can be employed as proof-theoretic means underneath combined decision procedures: We re-establish the correctness of the Nelson-Oppen procedure as an instance of the completeness of superposition. Thirdly, superposition can be used as a decision procedure for many interesting theories, turning a big engine into a little one. For the theory of bits and of fixed-size bitvectors, we suggest a rephrased axiomatization combined with a transformation of conjectures, based on which superposition decides the universal fragment. Furthermore, with a modification of lifting, we adapt superposition to the theory of bounded domains and give a decision procedure, which captures the Bernays-Schönfinkel class as well.
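As background for the calculus discussed above, the core superposition inference can be stated in its standard textbook form (after Bachmair and Ganzinger); the ordering and maximality side conditions are elided here:

\[
  \frac{C \lor l \approx r \qquad D \lor s[l'] \approx t}
       {(C \lor D \lor s[r] \approx t)\,\sigma}
  \qquad \sigma = \mathrm{mgu}(l, l')
\]

The inference rewrites the subterm l' of s with r under the most general unifier \sigma; a theory yields a decision procedure when saturation under such inferences, together with redundancy elimination, is guaranteed to terminate.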
Export
BibTeX
@phdthesis{HillenbrandDiss2008, TITLE = {Superposition and Decision Procedures -- Back and Forth}, AUTHOR = {Hillenbrand, Thomas}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {Two apparently different approaches to automating deduction are mentioned in the title; they are the subject of a debate on ``big engines vs.\ little engines of proof''. The contributions in this thesis advocate that these two strands of research can interplay in subtle and sometimes unexpected ways, such that mutual pervasion can lead to intriguing results: Firstly, superposition can be run on top of decision procedures. This we demonstrate for the class of Shostak theories, incorporating a little engine into a big one. As another instance of decision procedures within superposition, we show that ground confluent rewrite systems, which decide entailment problems in equational logic, can be harnessed for detecting redundancies in superposition derivations. Secondly, superposition can be employed as proof-theoretic means underneath combined decision procedures: We re-establish the correctness of the Nelson-Oppen procedure as an instance of the completeness of superposition. Thirdly, superposition can be used as a decision procedure for many interesting theories, turning a big engine into a little one. For the theory of bits and of fixed-size bitvectors, we suggest a rephrased axiomatization combined with a transformation of conjectures, based on which superposition decides the universal fragment. Furthermore, with a modification of lifting, we adapt superposition to the theory of bounded domains and give a decision procedure, which captures the Bernays-Sch{\"o}nfinkel class as well.}, }
Endnote
%0 Thesis %A Hillenbrand, Thomas %Y Weidenbach, Christoph %A referee: Finkbeiner, Bernd %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Superposition and Decision Procedures - Back and Forth : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-21ED-0 %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd %X Two apparently different approaches to automating deduction are mentioned in the title; they are the subject of a debate on "big engines vs. little engines of proof". The contributions in this thesis advocate that these two strands of research can interplay in subtle and sometimes unexpected ways, such that mutual pervasion can lead to intriguing results: Firstly, superposition can be run on top of decision procedures. This we demonstrate for the class of Shostak theories, incorporating a little engine into a big one. As another instance of decision procedures within superposition, we show that ground confluent rewrite systems, which decide entailment problems in equational logic, can be harnessed for detecting redundancies in superposition derivations. Secondly, superposition can be employed as proof-theoretic means underneath combined decision procedures: We re-establish the correctness of the Nelson-Oppen procedure as an instance of the completeness of superposition. Thirdly, superposition can be used as a decision procedure for many interesting theories, turning a big engine into a little one. For the theory of bits and of fixed-size bitvectors, we suggest a rephrased axiomatization combined with a transformation of conjectures, based on which superposition decides the universal fragment. Furthermore, with a modification of lifting, we adapt superposition to the theory of bounded domains and give a decision procedure, which captures the Bernays-Schönfinkel class as well. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2009/2419/
[173]
S. Knapp, “The Correctness of a Distributed Real-Time System,” Universität des Saarlandes, Saarbrücken, 2008.
Abstract
In this thesis we review and extend the pervasive correctness proof for an asynchronous distributed real-time system published in [KP07a]. We take a two-step approach: first, we argue about a single electronic control unit (ECU) consisting of a processor (running the OSEKtime-like operating system OLOS) and a FlexRay-like interface called automotive bus controller (ABC). We extend [KP07a] among other things by a local OLOS model [Kna08] and go into details regarding the handling of interrupts and the treatment of devices. Second, we connect several ECUs via the ABCs and reason about the complete distributed system, see also [KP07b]. Note that the formalization of the scheduling correctness is reported in [ABK08b]. Through several abstraction layers we prove the correctness of the distributed system with respect to a new lock-step model COA that completely abstracts from the ABCs. By establishing the DISTR model [Kna08] it becomes possible to literally reuse the arguments from the first part of this thesis and therefore to simplify the analysis of the complete distributed system. To illustrate the applicability of DISTR, we have formally proven the top-level correctness theorem in the theorem prover Isabelle/HOL. Throughout the thesis we tie together theorems regarding: processor, ABC, compiler, microkernel, operating system, and the worst-case execution time analysis of applications and systems software.
Export
BibTeX
@phdthesis{Knapp2008, TITLE = {The Correctness of a Distributed Real-Time System}, AUTHOR = {Knapp, Steffen}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {In this thesis we review and extend the pervasive correctness proof for an asynchronous distributed real-time system published in [KP07a]. We take a two-step approach: first, we argue about a single electronic control unit (ECU) consisting of a processor (running the OSEKtime-like operating system OLOS) and a FlexRay-like interface called automotive bus controller (ABC). We extend [KP07a] among other things by a local OLOS model [Kna08] and go into details regarding the handling of interrupts and the treatment of devices. Second, we connect several ECUs via the ABCs and reason about the complete distributed system, see also [KP07b]. Note that the formalization of the scheduling correctness is reported in [ABK08b]. Through several abstraction layers we prove the correctness of the distributed system with respect to a new lock-step model COA that completely abstracts from the ABCs. By establishing the DISTR model [Kna08] it becomes possible to literally reuse the arguments from the first part of this thesis and therefore to simplify the analysis of the complete distributed system. To illustrate the applicability of DISTR, we have formally proven the top-level correctness theorem in the theorem prover Isabelle/HOL. Throughout the thesis we tie together theorems regarding: processor, ABC, compiler, microkernel, operating system, and the worst-case execution time analysis of applications and systems software.}, }
Endnote
%0 Thesis %A Knapp, Steffen %Y Paul, Wolfgang %A referee: Kunz, Wolfgang %+ International Max Planck Research School, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T The Correctness of a Distributed Real-Time System : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B580-A %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd %X In this thesis we review and extend the pervasive correctness proof for an asynchronous distributed real-time system published in [KP07a]. We take a two-step approach: first, we argue about a single electronic control unit (ECU) consisting of a processor (running the OSEKtime-like operating system OLOS) and a FlexRay-like interface called automotive bus controller (ABC). We extend [KP07a] among other things by a local OLOS model [Kna08] and go into details regarding the handling of interrupts and the treatment of devices. Second, we connect several ECUs via the ABCs and reason about the complete distributed system, see also [KP07b]. Note that the formalization of the scheduling correctness is reported in [ABK08b]. Through several abstraction layers we prove the correctness of the distributed system with respect to a new lock-step model COA that completely abstracts from the ABCs. By establishing the DISTR model [Kna08] it becomes possible to literally reuse the arguments from the first part of this thesis and therefore to simplify the analysis of the complete distributed system. To illustrate the applicability of DISTR, we have formally proven the top-level correctness theorem in the theorem prover Isabelle/HOL. Throughout the thesis we tie together theorems regarding: processor, ABC, compiler, microkernel, operating system, and the worst-case execution time analysis of applications and systems software. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2010/3212/
[174]
T. Langer, “On Generalized Barycentric Coordinates and Their Applications in Geometric Modeling,” Universität des Saarlandes, Saarbrücken, 2008.
Abstract
Generalized barycentric coordinate systems allow us to express the position of a point in space with respect to a given polygon or higher-dimensional polytope. In such a system, a coordinate exists for each vertex of the polytope such that its vertices are represented by unit vectors e_i (where the coordinate associated with the respective vertex is 1, and all other coordinates are 0). Coordinates thus have a geometric meaning, which allows for the simplification of a number of tasks in geometry processing. Coordinate systems with respect to triangles have been around since the 19th century, and have since been generalized; however, all of them have certain drawbacks, and are often restricted to special types of polytopes. We eliminate most of these restrictions and introduce a definition for 3D mean value coordinates that is valid for arbitrary polyhedra in R^3, with a straightforward generalization to higher dimensions. Furthermore, we extend the notion of barycentric coordinates in such a way as to allow Hermite interpolation and investigate the capabilities of generalized barycentric coordinates for constructing generalized Bézier surfaces. Finally, we show that barycentric coordinates can be used to obtain a novel formula for curvature computation on surfaces.
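For intuition about what such coordinate systems compute, here is a minimal sketch of the planar ancestor of the construction above, Floater's 2D mean value coordinates. The function name and setup are illustrative; the query point x is assumed to lie strictly inside a simple polygon given in counterclockwise order.

import numpy as np

def mean_value_coordinates(verts, x):
    # verts: (n, 2) polygon vertices in CCW order; x: (2,) interior point.
    d = verts - x                    # vectors from x to each vertex
    r = np.linalg.norm(d, axis=1)    # distances ||v_i - x||
    n = len(verts)
    ang = np.empty(n)
    for i in range(n):               # signed angle alpha_i between v_i, v_{i+1}
        j = (i + 1) % n
        cross = d[i, 0] * d[j, 1] - d[i, 1] * d[j, 0]
        ang[i] = np.arctan2(cross, d[i] @ d[j])
    # w_i = (tan(alpha_{i-1}/2) + tan(alpha_i/2)) / ||v_i - x||
    w = (np.tan(np.roll(ang, 1) / 2) + np.tan(ang / 2)) / r
    return w / w.sum()               # normalized to sum to 1

The returned weights sum to 1 and reproduce x as a weighted average of the vertices, which is exactly the barycentric property the abstract generalizes to polyhedra in R^3.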
Export
BibTeX
@phdthesis{Langer08, TITLE = {On Generalized Barycentric Coordinates and Their Applications in Geometric Modeling}, AUTHOR = {Langer, Torsten}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-BFAF8554E927CBB8C125752400480229-Langer08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {Generalized barycentric coordinate systems allow us to express the position of a point in space with respect to a given polygon or higher dimensional polytope. In such a system, a coordinate exists for each vertex of the polytope such that its vertices are represented by unit vectors $\vect{e}_i$ (where the coordinate associated with the respective vertex is 1, and all other coordinates are 0). Coordinates thus have a geometric meaning, which allows for the simplification of a number of tasks in geometry processing. Coordinate systems with respect to triangles have been around since the 19\textsuperscript{th} century, and have since been generalized; however, all of them have certain drawbacks, and are often restricted to special types of polytopes. We eliminate most of these restrictions and introduce a definition for 3D mean value coordinates that is valid for arbitrary polyhedra in $\realspace{3}$, with a straightforward generalization to higher dimensions. Furthermore, we extend the notion of barycentric coordinates in such a way as to allow Hermite interpolation and investigate the capabilities of generalized barycentric coordinates for constructing generalized B\'ezier surfaces. Finally, we show that barycentric coordinates can be used to obtain a novel formula for curvature computation on surfaces.}, }
Endnote
%0 Thesis %A Langer, Torsten %Y Weikert, %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T On Generalized Barycentric Coordinates and Their Applications in Geometric Modeling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A90-6 %F EDOC: 428319 %F OTHER: Local-ID: C125756E0038A185-BFAF8554E927CBB8C125752400480229-Langer08 %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd %X Generalized barycentric coordinate systems allow us to express the position of a point in space with respect to a given polygon or higher-dimensional polytope. In such a system, a coordinate exists for each vertex of the polytope such that its vertices are represented by unit vectors e_i (where the coordinate associated with the respective vertex is 1, and all other coordinates are 0). Coordinates thus have a geometric meaning, which allows for the simplification of a number of tasks in geometry processing. Coordinate systems with respect to triangles have been around since the 19th century, and have since been generalized; however, all of them have certain drawbacks, and are often restricted to special types of polytopes. We eliminate most of these restrictions and introduce a definition for 3D mean value coordinates that is valid for arbitrary polyhedra in R^3, with a straightforward generalization to higher dimensions. Furthermore, we extend the notion of barycentric coordinates in such a way as to allow Hermite interpolation and investigate the capabilities of generalized barycentric coordinates for constructing generalized Bézier surfaces. Finally, we show that barycentric coordinates can be used to obtain a novel formula for curvature computation on surfaces.
[175]
S. Laue, “Approximation Algorithms for Geometric Optimization Problems,” Universität des Saarlandes, Saarbrücken, 2008.
Abstract
This thesis deals with a number of geometric optimization problems which are all NP-hard. The first problem we consider is the set cover problem for polytopes in R^3. Here, we are given a set of points in R^3 and a fixed set of translates of an arbitrary polytope. We would like to select a subset of the given polytopes such that each input point is covered by at least one polytope and the number of selected polytopes is minimal. By using epsilon-nets, we provide the first constant-factor approximation algorithm for this problem. The second set of problems that we consider are power assignment problems in wireless networks. Ad hoc wireless networks are a priori unstructured in the sense that they lack a predetermined interconnectivity. We consider a number of typical connectivity requirements and either give the first algorithms that compute a (1 + epsilon)-approximate energy-efficient solution to them, or drastically improve upon existing algorithms in running time. The algorithms are based on coresets. We then extend the algorithms from the Euclidean case to metrics of bounded doubling dimension and study metric spaces of bounded doubling dimension in more depth. The last problem that we consider is the k-hop minimum spanning tree, that is, we are given a graph and a specified root node and we would like to find a minimum spanning tree of the graph such that each root-leaf path contains at most k edges. We give the first PTAS for the problem in the Euclidean plane.
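The thesis obtains a constant-factor approximation via epsilon-nets; as a simpler point of reference, the classic greedy heuristic below solves the same covering formulation with only an O(log n) guarantee. A hedged sketch: representing each polytope translate by the set of input points it covers is an assumption made for illustration.

def greedy_set_cover(points, polytopes):
    # points: set of input points; polytopes: list of sets, where
    # polytopes[i] holds the points covered by the i-th translate.
    # Returns indices of the chosen translates.
    uncovered = set(points)
    chosen = []
    while uncovered:
        # pick the translate covering the most still-uncovered points
        i = max(range(len(polytopes)), key=lambda k: len(polytopes[k] & uncovered))
        if not polytopes[i] & uncovered:
            raise ValueError("the given translates cannot cover all points")
        chosen.append(i)
        uncovered -= polytopes[i]
    return chosen

Epsilon-net arguments beat this logarithmic factor for geometric range spaces of bounded VC dimension, such as translates of a fixed polytope, which is the route the thesis takes.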
Export
BibTeX
@phdthesis{LauePhd2008, TITLE = {Approximation Algorithms for Geometric Optimization Problems}, AUTHOR = {Laue, S{\"o}ren}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {This thesis deals with a number of geometric optimization problems which are all NP-hard. The first problem we consider is the set cover problem for polytopes in $R^3$. Here, we are given a set of points in $R^3$ and a fixed set of translates of an arbitrary polytope. We would like to select a subset of the given polytopes such that each input point is covered by at least one polytope and the number of selected polytopes is minimal. By using epsilon-nets, we provide the first constant-factor approximation algorithm for this problem. The second set of problems that we consider are power assignment problems in wireless networks. Ad hoc wireless networks are a priori unstructured in the sense that they lack a predetermined interconnectivity. We consider a number of typical connectivity requirements and either give the first algorithms that compute a $(1 + \epsilon)$-approximate energy-efficient solution to them, or drastically improve upon existing algorithms in running time. The algorithms are based on coresets. We then extend the algorithms from the Euclidean case to metrics of bounded doubling dimension and study metric spaces of bounded doubling dimension in more depth. The last problem that we consider is the k-hop minimum spanning tree, that is, we are given a graph and a specified root node and we would like to find a minimum spanning tree of the graph such that each root-leaf path contains at most k edges. We give the first PTAS for the problem in the Euclidean plane.}, }
Endnote
%0 Thesis %A Laue, Sören %Y Funke, Stefan %A referee: Mehlhorn, Kurt %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Approximation Algorithms for Geometric Optimization Problems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B591-4 %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd %X This thesis deals with a number of geometric optimization problems which are all NP-hard. The first problem we consider is the set cover problem for polytopes in R^3. Here, we are given a set of points in R^3 and a fixed set of translates of an arbitrary polytope. We would like to select a subset of the given polytopes such that each input point is covered by at least one polytope and the number of selected polytopes is minimal. By using epsilon-nets, we provide the first constant-factor approximation algorithm for this problem. The second set of problems that we consider are power assignment problems in wireless networks. Ad hoc wireless networks are a priori unstructured in the sense that they lack a predetermined interconnectivity. We consider a number of typical connectivity requirements and either give the first algorithms that compute a (1 + epsilon)-approximate energy-efficient solution to them, or drastically improve upon existing algorithms in running time. The algorithms are based on coresets. We then extend the algorithms from the Euclidean case to metrics of bounded doubling dimension and study metric spaces of bounded doubling dimension in more depth. The last problem that we consider is the k-hop minimum spanning tree, that is, we are given a graph and a specified root node and we would like to find a minimum spanning tree of the graph such that each root-leaf path contains at most k edges. We give the first PTAS for the problem in the Euclidean plane.
[176]
J. Luxenburger, “Modeling and exploiting user search behavior for information retrieval,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@phdthesis{LuxenburgerPhD08, TITLE = {Modeling and exploiting user search behavior for information retrieval}, AUTHOR = {Luxenburger, Julia}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-C51EDF058056F35FC12575900023DFFA-LuxenburgerPhD08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Luxenburger, Julia %Y Weikum, Gerhard %A referee: Klakow, Dietrich %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Modeling and exploiting user search behavior for information retrieval : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A8E-E %F EDOC: 428320 %F OTHER: Local-ID: C125756E0038A185-C51EDF058056F35FC12575900023DFFA-LuxenburgerPhD08 %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd
[177]
J. Maydt, “Analysis of recombination in Molecular Sequence Data,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@phdthesis{MaydtPhD08, TITLE = {Analysis of recombination in Molecular Sequence Data}, AUTHOR = {Maydt, Jochen}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-6F64DF58B58DF213C12575900028AFB3-MaydtPhD08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Maydt, Jochen %Y Lengauer, Thomas %A referee: Hein, Jotun %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Analysis of recombination in Molecular Sequence Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A96-9 %F EDOC: 428304 %F OTHER: Local-ID: C125756E0038A185-6F64DF58B58DF213C12575900028AFB3-MaydtPhD08 %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2011/4101/
[178]
R. Naujoks, “NP-hard Networking Problems : Exact and Approximate Algorithms,” Universität des Saarlandes, Saarbrücken, 2008.
Abstract
An important class of problems that occur in different fields of research, such as biology, linguistics, or the design of wireless communication networks, deals with the problem of finding an interconnection of a given set of objects. In the first part, we mainly deal with the so-called Steiner minimum tree problem in Hamming metric. The computation of such trees has turned out to be a key tool for the reconstruction of the ancestral relationships of species. We give a new exact algorithm that clearly outperforms the branch-and-bound based method of Hendy and Penny, which was considered to be the fastest for the last 25 years. Additionally, we propose an extended model that copes with the case in which the ancestral relationships are best described by a non-tree structure. In the last part, we deal with several problems occurring in the design of wireless ad-hoc networks: while minimizing the total power consumption of a wireless communication network, one wants to establish a messaging structure such that certain communication tasks can be performed. For these problems we show how approximate solutions can be found.
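Branch-and-bound methods of the Hendy-Penny kind repeatedly score candidate tree topologies; the classic scoring subroutine for a fixed topology is Fitch's small-parsimony algorithm, sketched below per character position. This is textbook background under illustrative names, not the thesis's new algorithm.

def fitch_changes(tree, leaf_state):
    # tree: nested 2-tuples over leaf names, e.g. (("a", "b"), ("c", "d"));
    # leaf_state: dict mapping each leaf name to its character state.
    # Returns the minimum number of state changes on this topology.
    changes = 0
    def state_set(node):
        nonlocal changes
        if isinstance(node, str):           # leaf: its observed state
            return frozenset({leaf_state[node]})
        left, right = map(state_set, node)  # binary internal node
        if left & right:                    # children agree: intersect
            return left & right
        changes += 1                        # children disagree: union, one change
        return left | right
    state_set(tree)
    return changes

# states A, A, G, G on the topology ((a, b), (c, d)) need exactly one change
assert fitch_changes((("a", "b"), ("c", "d")),
                     {"a": "A", "b": "A", "c": "G", "d": "G"}) == 1

Summing this count over all character positions gives the tree length in Hamming metric for that fixed topology; minimizing over topologies yields the Steiner minimum tree targeted above.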
Export
BibTeX
@phdthesis{NaujoksPhD, TITLE = {{NP}-hard Networking Problems : Exact and Approximate Algorithms}, AUTHOR = {Naujoks, Rouven}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-15B087343F0A744EC125755B00417FC9-NaujoksPhD}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {An important class of problems that occur in different fields of research, such as biology, linguistics, or the design of wireless communication networks, deals with the problem of finding an interconnection of a given set of objects. In the first part, we mainly deal with the so-called Steiner minimum tree problem in Hamming metric. The computation of such trees has turned out to be a key tool for the reconstruction of the ancestral relationships of species. We give a new exact algorithm that clearly outperforms the branch-and-bound based method of Hendy and Penny, which was considered to be the fastest for the last $25$ years. Additionally, we propose an extended model that copes with the case in which the ancestral relationships are best described by a non-tree structure. In the last part, we deal with several problems occurring in the design of wireless ad-hoc networks: while minimizing the total power consumption of a wireless communication network, one wants to establish a messaging structure such that certain communication tasks can be performed. For these problems we show how approximate solutions can be found.}, }
Endnote
%0 Thesis %A Naujoks, Rouven %Y Mehlhorn, Kurt %A referee: Althaus, Ernst %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T NP-hard Networking Problems : Exact and Approximate Algorithms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A84-2 %F EDOC: 428321 %F OTHER: Local-ID: C125756E0038A185-15B087343F0A744EC125755B00417FC9-NaujoksPhD %I Universität des Saarlandes %C Saarbrücken %D 2008 %V phd %9 phd %X An important class of problems that o