# PhD Theses

2020
[1]
M. Fleury, “Formalization of logical calculi in Isabelle/HOL,” Universität des Saarlandes, Saarbrücken, 2020.
Abstract
I develop a formal framework for propositional satisfiability with the conflict-driven clause learning (CDCL) procedure using the Isabelle/HOL proof assistant. The framework offers a convenient way to prove metatheorems and experiment with variants, including the Davis-Putnam-Logemann-Loveland (DPLL) procedure. The most noteworthy aspects of my work are the inclusion of rules for forget and restart and the refinement approach. I use the formalization to develop three extensions: first, an incremental solving extension of CDCL; second, a verified optimizing CDCL (OCDCL) that, given a cost function on literals, derives an optimal model with minimum cost; finally, model covering. Because they reuse the CDCL framework, these extensions are easier to develop. Through a chain of refinements, I connect the abstract CDCL calculus first to a more concrete calculus, then to a SAT solver expressed in a simple functional programming language, and finally to a SAT solver in an imperative language, with total correctness guarantees. The imperative version relies on the two-watched-literal data structure and other optimizations found in modern solvers. I use the Isabelle Refinement Framework to automate the most tedious refinement steps. I then extend this work with further optimizations such as blocking literals and the use of machine words for as long as possible before switching to unbounded integers to preserve completeness.
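As background for the calculi this thesis formalizes, the DPLL procedure mentioned in the abstract can be sketched in a few lines. This is a minimal illustrative recursion (not the verified Isabelle/HOL development, and without the clause learning, two-watched-literal, or restart machinery the thesis covers); literals follow the DIMACS convention, where `-v` is the negation of variable `v`.

```python
from typing import Optional

def dpll(clauses: list[frozenset[int]], assignment: dict[int, bool]) -> Optional[dict[int, bool]]:
    """Return a satisfying assignment, or None if the clause set is unsatisfiable."""
    # Simplify: drop satisfied clauses, strip falsified literals.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied under the partial assignment
        rest = frozenset(l for l in clause if abs(l) not in assignment)
        if not rest:
            return None  # empty clause: conflict
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    # Unit propagation: a one-literal clause forces its literal's value.
    for clause in simplified:
        if len(clause) == 1:
            (lit,) = clause
            return dpll(simplified, {**assignment, abs(lit): lit > 0})
    # Decide: branch on some unassigned variable, trying both polarities.
    lit = next(iter(simplified[0]))
    return (dpll(simplified, {**assignment, abs(lit): True})
            or dpll(simplified, {**assignment, abs(lit): False}))
```

CDCL extends this scheme by analyzing conflicts to learn new clauses and backjump non-chronologically, which is where the forget and restart rules studied in the thesis come in.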
[2]
Y. He, “Improved Methods and Analysis for Semantic Image Segmentation,” Universität des Saarlandes, Saarbrücken, 2020.
Abstract
Modern deep learning has enabled amazing developments in computer vision in recent years (Hinton and Salakhutdinov, 2006; Krizhevsky et al., 2012). As a fundamental task, semantic segmentation aims to predict class labels for each pixel of an image, which empowers machines' perception of the visual world. In spite of recent successes of fully convolutional networks (Long et al., 2015), several challenges remain to be addressed. In this thesis, we focus on this topic under different kinds of input formats and various types of scenes. Specifically, our study covers two aspects: (1) data-driven neural modules for improved performance, and (2) leveraging datasets to train systems with higher performance and better data privacy guarantees. In the first part of this thesis, we improve semantic segmentation by designing new modules which are compatible with existing architectures. First, we develop a spatio-temporal data-driven pooling, which brings additional information about the data (i.e., superpixels) into neural networks, benefiting both the training of neural networks and the inference on novel data. We investigate our approach on RGB-D videos for segmenting indoor scenes, where depth provides complementary cues to color and our model performs particularly well. Second, we design learnable dilated convolutions, which extend standard dilated convolutions, whose dilation factors (Yu and Koltun, 2016) need to be carefully determined by hand to obtain decent performance. We present a method to learn dilation factors together with the filter weights of convolutions, avoiding a complicated search over dilation factors. We conduct extensive studies on challenging street scenes, across various baselines of different complexity as well as several datasets at varying image resolutions. In the second part, we investigate how to utilize expensive training data.
First, we start from generative modelling and study the network architectures and the learning pipeline for generating multiple examples. We aim to improve the diversity of generated examples while preserving their quality. Second, we develop a generative model for synthesizing features of a network. With a mixture of real images and synthetic features, we are able to train a segmentation model with better generalization capability. Our approach is evaluated on different scene parsing tasks to demonstrate the effectiveness of the proposed method. Finally, we study membership inference on the semantic segmentation task. We propose the first membership inference attack system against black-box semantic segmentation models, which tries to infer whether a data pair was used as training data or not. From our observations, information about the training data is indeed leaking. To mitigate the leakage, we leverage our synthetic features to perform prediction obfuscation, reducing the posterior distribution gap between a training and a testing set. Consequently, our study provides not only an approach for detecting illegal use of data, but also the foundations for a safer use of semantic segmentation models.
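The membership-inference idea in the last paragraph rests on a simple observation: segmentation models tend to be more confident on pairs they were trained on. A minimal black-box sketch of that intuition is a threshold on the average per-pixel posterior assigned to the ground-truth class; the function names and the threshold are illustrative, not the thesis's actual attack system.

```python
import numpy as np

def membership_score(posteriors: np.ndarray, labels: np.ndarray) -> float:
    """Average per-pixel confidence the model assigns to the ground-truth class.
    posteriors: (H, W, C) softmax outputs; labels: (H, W) class indices."""
    h, w = labels.shape
    # Advanced indexing picks posteriors[i, j, labels[i, j]] for every pixel.
    return float(posteriors[np.arange(h)[:, None], np.arange(w)[None, :], labels].mean())

def infer_membership(posteriors: np.ndarray, labels: np.ndarray, threshold: float = 0.9) -> bool:
    # High confidence on the true labels suggests the pair was in the training set.
    return membership_score(posteriors, labels) > threshold
```

The prediction obfuscation described above counters exactly this signal by narrowing the posterior gap between training and test inputs.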
2019
[3]
A. Abujabal, “Question Answering over Knowledge Bases with Continuous Learning,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Answering complex natural language questions with crisp answers is crucial to satisfying the information needs of advanced users. With the rapid growth of knowledge bases (KBs) such as Yago and Freebase, this goal has become attainable by translating questions into formal queries like SPARQL queries. Such queries can then be evaluated over knowledge bases to retrieve crisp answers. To this end, three research issues arise: (i) how to develop methods that are robust to lexical and syntactic variations in questions and can handle complex questions, (ii) how to design and curate datasets to advance research in question answering, and (iii) how to efficiently identify named entities in questions. In this dissertation, we make the following five contributions in the areas of question answering (QA) and named entity recognition (NER). For issue (i), we make the following contributions: We present QUINT, an approach for answering natural language questions over knowledge bases using automatically learned templates. Templates are an important asset for QA over KBs, simplifying the semantic parsing of input questions and generating formal queries for interpretable answers. QUINT is capable of answering both simple and compositional questions. We introduce NEQA, a framework for continuous learning for QA over KBs. NEQA starts with a small seed of training examples in the form of question-answer pairs, and improves its performance over time. NEQA combines both syntax, through template-based answering, and semantics, via a semantic similarity function. Moreover, it adapts to the language used after deployment by periodically retraining its underlying models. For issues (i) and (ii), we present TEQUILA, a framework for answering complex questions with explicit and implicit temporal conditions over KBs.
TEQUILA is built on a rule-based framework that detects and decomposes temporal questions into simpler sub-questions that can be answered by standard KB-QA systems. TEQUILA reconciles the results of sub-questions into final answers. TEQUILA is accompanied by a dataset called TempQuestions, which consists of 1,271 temporal questions with gold-standard answers over Freebase. This collection is derived by judiciously selecting time-related questions from existing QA datasets. For issue (ii), we publish ComQA, a large-scale manually-curated dataset for QA. ComQA contains questions that represent real information needs and exhibit a wide range of difficulties such as the need for temporal reasoning, comparison, and compositionality. ComQA contains paraphrase clusters of semantically-equivalent questions that can be exploited by QA systems. We harness a combination of community question-answering platforms and crowdsourcing to construct the ComQA dataset. For issue (iii), we introduce a neural network model based on subword units for named entity recognition. The model learns word representations using a combination of characters, bytes and phonemes. While achieving comparable performance with word-level based models, our model has an order-of-magnitude smaller vocabulary size and lower memory requirements, and it handles out-of-vocabulary words.
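The rule-based decomposition behind a system like TEQUILA can be illustrated with a single pattern: split off an explicit temporal condition (cue word plus year) so the remaining sub-question can go to a standard KB-QA system. This regex and its coverage are a toy assumption of ours, far simpler than the thesis's actual rule set, which also handles implicit conditions.

```python
import re
from typing import Optional

# Illustrative pattern: "<main question> <cue> <4-digit year>[?]"
_TEMPORAL = re.compile(
    r"(?P<main>.+?)\s+(?P<cue>before|after|during|in)\s+(?P<year>\d{4})\??$",
    re.IGNORECASE,
)

def decompose_temporal(question: str) -> tuple[str, Optional[tuple[str, int]]]:
    """Return (sub-question, temporal constraint) or (question, None) if no
    explicit temporal condition is detected."""
    m = _TEMPORAL.match(question)
    if not m:
        return question, None
    return m.group("main") + "?", (m.group("cue").lower(), int(m.group("year")))
```

The sub-question is answered first; the reconciliation step then filters candidate answers by the extracted constraint (e.g. keep only entities whose relevant time span lies before 1990).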
[4]
J. A. Biega, “Enhancing Privacy and Fairness in Search Systems,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Following a period of expedited progress in the capabilities of digital systems, society has begun to realize that systems designed to assist people in various tasks can also harm individuals and society. Mediating access to information and explicitly or implicitly ranking people in increasingly many applications, search systems have a substantial potential to contribute to such unwanted outcomes. Since they collect vast amounts of data about both searchers and search subjects, they have the potential to violate the privacy of both of these groups of users. Moreover, in applications where rankings influence people's economic livelihood outside of the platform, such as sharing economy or hiring support websites, search engines have an immense economic power over their users in that they control user exposure in ranked results. This thesis develops new models and methods broadly covering different aspects of privacy and fairness in search systems for both searchers and search subjects. Specifically, it makes the following contributions: (1) We propose a model for computing individually fair rankings where search subjects get exposure proportional to their relevance. The exposure is amortized over time using constrained optimization to overcome searcher attention biases while preserving ranking utility. (2) We propose a model for computing sensitive search exposure where each subject gets to know the sensitive queries that lead to her profile in the top-k search results. The problem of finding exposing queries is technically modeled as reverse nearest neighbor search, followed by a weakly supervised learning-to-rank model ordering the queries by privacy-sensitivity. (3) We propose a model for quantifying privacy risks from textual data in online communities. The method builds on a topic model where each topic is annotated by a crowdsourced sensitivity score, and privacy risks are associated with a user's relevance to sensitive topics.
We propose relevance measures capturing different dimensions of user interest in a topic and show how they correlate with human risk perceptions. (4) We propose a model for privacy-preserving personalized search where search queries of different users are split and merged into synthetic profiles. The model mediates the privacy-utility trade-off by keeping semantically coherent fragments of search histories within individual profiles, while trying to minimize the similarity of any of the synthetic profiles to the original user profiles. The models are evaluated using information retrieval techniques and user studies over a variety of datasets, ranging from query logs, through social media and community question answering postings, to item listings from sharing economy platforms.
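The exposure-amortization idea in contribution (1) can be sketched with a greedy stand-in for the thesis's constrained optimization: at each ranking, order subjects by their exposure deficit (the exposure their cumulative relevance merits, minus the exposure they have already received), then charge each subject the attention weight of its assigned position. All names and the deficit heuristic are our illustrative assumptions, not the dissertation's formulation.

```python
def amortized_ranking(relevance: dict[str, float],
                      exposure_so_far: dict[str, float],
                      position_weights: list[float]) -> list[str]:
    """Produce one ranking so that, over repeated calls, each subject's
    cumulative exposure tracks its share of cumulative relevance."""
    total_rel = sum(relevance.values())
    # Total attention distributed after this ranking is issued.
    total_exp = sum(exposure_so_far.values()) + sum(position_weights)

    def deficit(subject: str) -> float:
        merited = relevance[subject] / total_rel * total_exp
        return merited - exposure_so_far[subject]

    ranking = sorted(relevance, key=deficit, reverse=True)
    # Charge each subject the attention weight of the position it received.
    for subject, weight in zip(ranking, position_weights):
        exposure_so_far[subject] += weight
    return ranking
```

With position weights [1.0, 0.5] and relevances 0.6 vs 0.4, the higher-relevance subject takes the top slot at first, but after a few rankings the lower-relevance subject's accumulated deficit earns it the top position, which is the amortization effect the thesis formalizes as an optimization problem.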
Export
BibTeX
@phdthesis{biegaphd2019, TITLE = {Enhancing Privacy and Fairness in Search Systems}, AUTHOR = {Biega, Joanna Asia}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-278861}, DOI = {10.22028/D291-27886}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {Following a period of expedited progress in the capabilities of digital systems, the society begins to realize that systems designed to assist people in various tasks can also harm individuals and society. Mediating access to information and explicitly or implicitly ranking people in increasingly many applications, search systems have a substantial potential to contribute to such unwanted outcomes. Since they collect vast amounts of data about both searchers and search subjects, they have the potential to violate the privacy of both of these groups of users. Moreover, in applications where rankings influence people's economic livelihood outside of the platform, such as sharing economy or hiring support websites, search engines have an immense economic power over their users in that they control user exposure in ranked results. This thesis develops new models and methods broadly covering different aspects of privacy and fairness in search systems for both searchers and search subjects. Specifically, it makes the following contributions: (1) We propose a model for computing individually fair rankings where search subjects get exposure proportional to their relevance. The exposure is amortized over time using constrained optimization to overcome searcher attention biases while preserving ranking utility. (2) We propose a model for computing sensitive search exposure where each subject gets to know the sensitive queries that lead to her profile in the top-k search results. 
The problem of finding exposing queries is technically modeled as reverse nearest neighbor search, followed by a weekly-supervised learning to rank model ordering the queries by privacy-sensitivity. (3) We propose a model for quantifying privacy risks from textual data in online communities. The method builds on a topic model where each topic is annotated by a crowdsourced sensitivity score, and privacy risks are associated with a user's relevance to sensitive topics. We propose relevance measures capturing different dimensions of user interest in a topic and show how they correlate with human risk perceptions. (4) We propose a model for privacy-preserving personalized search where search queries of different users are split and merged into synthetic profiles. The model mediates the privacy-utility trade-off by keeping semantically coherent fragments of search histories within individual profiles, while trying to minimize the similarity of any of the synthetic profiles to the original user profiles. The models are evaluated using information retrieval techniques and user studies over a variety of datasets, ranging from query logs, through social media and community question answering postings, to item listings from sharing economy platforms.}, }
Endnote
%0 Thesis %A Biega, Joanna Asia %Y Weikum, Gerhard %A referee: Gummadi, Krishna %A referee: Nejdl, Wolfgang %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society External Organizations %T Enhancing Privacy and Fairness in Search Systems : %G eng %U http://hdl.handle.net/21.11116/0000-0003-9AED-5 %R 10.22028/D291-27886 %U urn:nbn:de:bsz:291--ds-278861 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2019 %P 111 p. %V phd %9 phd %X Following a period of expedited progress in the capabilities of digital systems, the society begins to realize that systems designed to assist people in various tasks can also harm individuals and society. Mediating access to information and explicitly or implicitly ranking people in increasingly many applications, search systems have a substantial potential to contribute to such unwanted outcomes. Since they collect vast amounts of data about both searchers and search subjects, they have the potential to violate the privacy of both of these groups of users. Moreover, in applications where rankings influence people's economic livelihood outside of the platform, such as sharing economy or hiring support websites, search engines have an immense economic power over their users in that they control user exposure in ranked results. This thesis develops new models and methods broadly covering different aspects of privacy and fairness in search systems for both searchers and search subjects. Specifically, it makes the following contributions: (1) We propose a model for computing individually fair rankings where search subjects get exposure proportional to their relevance. 
The exposure is amortized over time using constrained optimization to overcome searcher attention biases while preserving ranking utility. (2) We propose a model for computing sensitive search exposure where each subject gets to know the sensitive queries that lead to her profile in the top-k search results. The problem of finding exposing queries is technically modeled as reverse nearest neighbor search, followed by a weakly-supervised learning-to-rank model ordering the queries by privacy-sensitivity. (3) We propose a model for quantifying privacy risks from textual data in online communities. The method builds on a topic model where each topic is annotated by a crowdsourced sensitivity score, and privacy risks are associated with a user's relevance to sensitive topics. We propose relevance measures capturing different dimensions of user interest in a topic and show how they correlate with human risk perceptions. (4) We propose a model for privacy-preserving personalized search where search queries of different users are split and merged into synthetic profiles. The model mediates the privacy-utility trade-off by keeping semantically coherent fragments of search histories within individual profiles, while trying to minimize the similarity of any of the synthetic profiles to the original user profiles. The models are evaluated using information retrieval techniques and user studies over a variety of datasets, ranging from query logs, through social media and community question answering postings, to item listings from sharing economy platforms. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27389
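The amortized-exposure idea in contribution (1) above can be illustrated with a toy greedy scheme: over a series of rankings, subjects whose accumulated exposure lags furthest behind their relevance-proportional target are ranked first. This is only a sketch of the general idea; the thesis uses constrained optimization, and the greedy deficit rule, the 1/log2 position-bias model, and all names below are illustrative assumptions.

```python
import math

def amortized_ranking(relevance, accumulated_exposure):
    """Greedy toy version of relevance-proportional exposure:
    rank subjects by how far their accumulated exposure share lags
    behind their relevance share (larger deficit ranks first)."""
    total_rel = sum(relevance.values())
    total_exp = sum(accumulated_exposure.values()) or 1.0
    deficit = {s: relevance[s] / total_rel - accumulated_exposure[s] / total_exp
               for s in relevance}
    return sorted(relevance, key=lambda s: deficit[s], reverse=True)

def add_exposure(ranking, accumulated_exposure):
    # Standard position-bias model: exposure decays as 1 / log2(rank + 1).
    for pos, s in enumerate(ranking, start=1):
        accumulated_exposure[s] += 1.0 / math.log2(pos + 1)

relevance = {"a": 0.5, "b": 0.3, "c": 0.2}
exposure = {s: 0.0 for s in relevance}
for _ in range(100):  # amortize fairness over a series of rankings
    add_exposure(amortized_ranking(relevance, exposure), exposure)
shares = {s: e / sum(exposure.values()) for s, e in exposure.items()}
```

After many rounds, the exposure shares track the relevance ordering even though no single ranking can be exactly proportional.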
[5]
A. Dheghani Amirabad, “From genes to transcripts : integrative modeling and analysis of regulatory networks,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Although all the cells in an organism possess the same genome, the regulatory mechanisms lead to highly specific cell types. Elucidating these regulatory mechanisms is a great challenge in systems biology research. Although it is known that a large fraction of our genome is comprised of regulatory elements, the precise mechanisms by which different combinations of regulatory elements are involved in controlling gene expression and cell identity are poorly understood. This thesis describes algorithms and approaches for modeling and analysis of different modes of gene regulation. We present POSTIT, a novel algorithm for modeling and inferring transcript isoform regulation from transcriptomics and epigenomics data. POSTIT uses multi-task learning with a structured-sparsity-inducing regularizer to share the regulatory information between isoforms of a gene, which is shown to lead to accurate isoform expression prediction and inference of regulators. Furthermore, it can use isoform expression level and annotation as informative priors for gene expression prediction. Hence, it constitutes a novel, accurate approach applicable to gene- or transcript-isoform-centric analysis using expression data. In an application to microRNA (miRNA) target prioritization, we demonstrate that it out-competes classical gene-centric methods. Moreover, it pinpoints important transcription factors and miRNAs that regulate differentially expressed isoforms in any biological system. Competing endogenous RNA (ceRNA) interactions mediated by miRNAs were postulated as an important cellular regulatory network, in which cross-talk between different transcripts involves competition for joint regulators. We developed a novel statistical method, called SPONGE, for large-scale inference of ceRNA networks.
In this framework, we designed an efficient empirical p-value computation approach, by sampling from derived null models, which addresses important confounding factors such as sample size, number of involved regulators and strength of correlation. In an application to a large pan-cancer dataset with 31 cancers, we discovered protein-coding and non-coding RNAs that are generic ceRNAs in cancer. Finally, we present an integrative analysis of miRNA and protein-based posttranscriptional regulation. We postulate a competitive regulation of the RNA-binding protein IMP2 with miRNAs binding the same RNAs using expression and RNA binding data. This function of IMP2 is relevant to disease in the context of adult cellular metabolism. In summary, this thesis presents a number of novel approaches for the inference and integrative analysis of regulatory networks that we believe will find wide applicability in the biological sciences.
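The empirical p-value computation that SPONGE relies on can be sketched in a few lines: sample the test statistic under a null model and count how often the null matches or exceeds the observed value. This is a generic permutation-style sketch; `empirical_pvalue` and the toy null model are illustrative assumptions, not SPONGE's actual interface.

```python
import random

def empirical_pvalue(observed, null_sampler, n_samples=1000, seed=0):
    """Empirical p-value by sampling a test statistic from a null model.
    Uses the standard +1 correction so the p-value is never exactly zero."""
    rng = random.Random(seed)
    hits = sum(null_sampler(rng) >= observed for _ in range(n_samples))
    return (hits + 1) / (n_samples + 1)

# Toy null model: the statistic computed on unrelated data, here the
# mean of 10 uniform draws (mean 0.5), so an observed value of 0.9 is extreme.
p = empirical_pvalue(0.9, lambda rng: sum(rng.random() for _ in range(10)) / 10)
```

Deriving the null from sampled data of matching size is what lets such a scheme control for confounders like sample size and number of regulators.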
Export
BibTeX
@phdthesis{Dehghaniphd2019, TITLE = {From genes to transcripts : integrative modeling and analysis of regulatory networks}, AUTHOR = {Dheghani Amirabad, Azim}, LANGUAGE = {eng}, DOI = {10.22028/D291-28659}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {Although all the cells in an organism possess the same genome, the regulatory mechanisms lead to highly specific cell types. Elucidating these regulatory mechanisms is a great challenge in systems biology research. Although it is known that a large fraction of our genome is comprised of regulatory elements, the precise mechanisms by which different combinations of regulatory elements are involved in controlling gene expression and cell identity are poorly understood. This thesis describes algorithms and approaches for modeling and analysis of different modes of gene regulation. We present POSTIT, a novel algorithm for modeling and inferring transcript isoform regulation from transcriptomics and epigenomics data. POSTIT uses multi-task learning with a structured-sparsity-inducing regularizer to share the regulatory information between isoforms of a gene, which is shown to lead to accurate isoform expression prediction and inference of regulators. Furthermore, it can use isoform expression level and annotation as informative priors for gene expression prediction. Hence, it constitutes a novel, accurate approach applicable to gene- or transcript-isoform-centric analysis using expression data. In an application to microRNA (miRNA) target prioritization, we demonstrate that it out-competes classical gene-centric methods. Moreover, it pinpoints important transcription factors and miRNAs that regulate differentially expressed isoforms in any biological system. 
Competing endogenous RNA (ceRNA) interactions mediated by miRNAs were postulated as an important cellular regulatory network, in which cross-talk between different transcripts involves competition for joint regulators. We developed a novel statistical method, called SPONGE, for large-scale inference of ceRNA networks. In this framework, we designed an efficient empirical p-value computation approach, by sampling from derived null models, which addresses important confounding factors such as sample size, number of involved regulators and strength of correlation. In an application to a large pan-cancer dataset with 31 cancers, we discovered protein-coding and non-coding RNAs that are generic ceRNAs in cancer. Finally, we present an integrative analysis of miRNA and protein-based posttranscriptional regulation. We postulate a competitive regulation of the RNA-binding protein IMP2 with miRNAs binding the same RNAs using expression and RNA binding data. This function of IMP2 is relevant to disease in the context of adult cellular metabolism. In summary, this thesis presents a number of novel approaches for the inference and integrative analysis of regulatory networks that we believe will find wide applicability in the biological sciences.}, }
Endnote
%0 Thesis %A Dheghani Amirabad, Azim %Y Schulz, Marcel %A referee: Keller, Andreas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T From genes to transcripts : integrative modeling and analysis of regulatory networks : %G eng %U http://hdl.handle.net/21.11116/0000-0005-438D-1 %R 10.22028/D291-28659 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2019 %P 139 p. %V phd %9 phd %X Although all the cells in an organism possess the same genome, the regulatory mechanisms lead to highly specific cell types. Elucidating these regulatory mechanisms is a great challenge in systems biology research. Although it is known that a large fraction of our genome is comprised of regulatory elements, the precise mechanisms by which different combinations of regulatory elements are involved in controlling gene expression and cell identity are poorly understood. This thesis describes algorithms and approaches for modeling and analysis of different modes of gene regulation. We present POSTIT, a novel algorithm for modeling and inferring transcript isoform regulation from transcriptomics and epigenomics data. POSTIT uses multi-task learning with a structured-sparsity-inducing regularizer to share the regulatory information between isoforms of a gene, which is shown to lead to accurate isoform expression prediction and inference of regulators. Furthermore, it can use isoform expression level and annotation as informative priors for gene expression prediction. Hence, it constitutes a novel, accurate approach applicable to gene- or transcript-isoform-centric analysis using expression data. In an application to microRNA (miRNA) target prioritization, we demonstrate that it out-competes classical gene-centric methods. 
Moreover, it pinpoints important transcription factors and miRNAs that regulate differentially expressed isoforms in any biological system. Competing endogenous RNA (ceRNA) interactions mediated by miRNAs were postulated as an important cellular regulatory network, in which cross-talk between different transcripts involves competition for joint regulators. We developed a novel statistical method, called SPONGE, for large-scale inference of ceRNA networks. In this framework, we designed an efficient empirical p-value computation approach, by sampling from derived null models, which addresses important confounding factors such as sample size, number of involved regulators and strength of correlation. In an application to a large pan-cancer dataset with 31 cancers, we discovered protein-coding and non-coding RNAs that are generic ceRNAs in cancer. Finally, we present an integrative analysis of miRNA and protein-based posttranscriptional regulation. We postulate a competitive regulation of the RNA-binding protein IMP2 with miRNAs binding the same RNAs using expression and RNA binding data. This function of IMP2 is relevant to disease in the context of adult cellular metabolism. In summary, this thesis presents a number of novel approaches for the inference and integrative analysis of regulatory networks that we believe will find wide applicability in the biological sciences. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27669
[6]
M. Döring, “Computational Approaches for Improving Treatment and Prevention of Viral Infections,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
The treatment of infections with HIV or HCV is challenging. Thus, novel drugs and new computational approaches that support the selection of therapies are required. This work presents methods that support therapy selection as well as methods that advance novel antiviral treatments. geno2pheno[ngs-freq] identifies drug resistance from HIV-1 or HCV samples that were subjected to next-generation sequencing by interpreting their sequences either via support vector machines or a rules-based approach. geno2pheno[coreceptor-hiv2] determines the coreceptor that is used for viral cell entry by analyzing a segment of the HIV-2 surface protein with a support vector machine. openPrimeR is capable of finding optimal combinations of primers for multiplex polymerase chain reaction by solving a set cover problem and accessing a new logistic regression model for determining amplification events arising from polymerase chain reaction. geno2pheno[ngs-freq] and geno2pheno[coreceptor-hiv2] enable the personalization of antiviral treatments and support clinical decision making. The application of openPrimeR to human immunoglobulin sequences has resulted in novel primer sets that improve the isolation of broadly neutralizing antibodies against HIV-1. The methods that were developed in this work thus constitute important contributions towards improving the prevention and treatment of viral infectious diseases.
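The set cover formulation behind primer selection admits the classic greedy approximation, sketched below: each primer is a set of template sequences it amplifies, and we repeatedly pick the primer covering the most still-uncovered templates. This is the textbook algorithm, not openPrimeR's implementation; all primer and template names are made up.

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation to set cover: repeatedly pick the subset
    that covers the most still-uncovered elements (here: the primer
    amplifying the most still-uncovered template sequences)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("some templates cannot be covered by any primer")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Hypothetical instance: which templates each candidate primer amplifies.
templates = {"t1", "t2", "t3", "t4"}
primers = {"p1": {"t1", "t2"}, "p2": {"t2", "t3"}, "p3": {"t3", "t4"}}
cover = greedy_set_cover(templates, primers)
```

The greedy rule gives a logarithmic approximation guarantee, which is why it is the standard starting point for cover-style primer selection.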
Export
BibTeX
@phdthesis{Doringphd2013, TITLE = {Computational Approaches for Improving Treatment and Prevention of Viral Infections}, AUTHOR = {D{\"o}ring, Matthias}, LANGUAGE = {eng}, DOI = {10.22028/D291-27946}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {The treatment of infections with HIV or HCV is challenging. Thus, novel drugs and new computational approaches that support the selection of therapies are required. This work presents methods that support therapy selection as well as methods that advance novel antiviral treatments. geno2pheno[ngs-freq] identifies drug resistance from HIV-1 or HCV samples that were subjected to next-generation sequencing by interpreting their sequences either via support vector machines or a rules-based approach. geno2pheno[coreceptor-hiv2] determines the coreceptor that is used for viral cell entry by analyzing a segment of the HIV-2 surface protein with a support vector machine. openPrimeR is capable of finding optimal combinations of primers for multiplex polymerase chain reaction by solving a set cover problem and accessing a new logistic regression model for determining amplification events arising from polymerase chain reaction. geno2pheno[ngs-freq] and geno2pheno[coreceptor-hiv2] enable the personalization of antiviral treatments and support clinical decision making. The application of openPrimeR to human immunoglobulin sequences has resulted in novel primer sets that improve the isolation of broadly neutralizing antibodies against HIV-1. The methods that were developed in this work thus constitute important contributions towards improving the prevention and treatment of viral infectious diseases.}, }
Endnote
%0 Thesis %A D&#246;ring, Matthias %Y Pfeifer, Nico %A referee: Lengauer, Thomas %A referee: Kalinina, Olga V. %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Computational Approaches for Improving Treatment and Prevention of Viral Infections : %G eng %U http://hdl.handle.net/21.11116/0000-0003-AEBA-8 %R 10.22028/D291-27946 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2019 %P 337 p. %V phd %9 phd %X The treatment of infections with HIV or HCV is challenging. Thus, novel drugs and new computational approaches that support the selection of therapies are required. This work presents methods that support therapy selection as well as methods that advance novel antiviral treatments. geno2pheno[ngs-freq] identifies drug resistance from HIV-1 or HCV samples that were subjected to next-generation sequencing by interpreting their sequences either via support vector machines or a rules-based approach. geno2pheno[coreceptor-hiv2] determines the coreceptor that is used for viral cell entry by analyzing a segment of the HIV-2 surface protein with a support vector machine. openPrimeR is capable of finding optimal combinations of primers for multiplex polymerase chain reaction by solving a set cover problem and accessing a new logistic regression model for determining amplification events arising from polymerase chain reaction. geno2pheno[ngs-freq] and geno2pheno[coreceptor-hiv2] enable the personalization of antiviral treatments and support clinical decision making. 
The application of openPrimeR to human immunoglobulin sequences has resulted in novel primer sets that improve the isolation of broadly neutralizing antibodies against HIV-1. The methods that were developed in this work thus constitute important contributions towards improving the prevention and treatment of viral infectious diseases. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27443
[7]
P. Ebert, “What we leave behind : reproducibility in chromatin analysis within and across species,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Epigenetics is the field of biology that investigates heritable factors regulating gene expression without being directly encoded in the genome of an organism. The human genome is densely packed inside a cell's nucleus in the form of chromatin. Certain constituents of chromatin play a vital role as epigenetic factors in the dynamic regulation of gene expression. Epigenetic changes on the chromatin level are thus an integral part of the mechanisms governing the development of the functionally diverse cell types in multicellular species such as human. Studying these mechanisms is not only important to understand the biology of healthy cells, but also necessary to comprehend the epigenetic component in the formation of many complex diseases. Modern wet lab technology enables scientists to probe the epigenome with high throughput and in extensive detail. The fast generation of epigenetic datasets burdens computational researchers with the challenge of rapidly performing elaborate analyses without compromising on the scientific reproducibility of the reported findings. To facilitate reproducible computational research in epigenomics, this thesis proposes a task-oriented metadata model, relying on web technology and supported by database engineering, that aims at consistent and human-readable documentation of standardized computational workflows. The suggested approach features, e.g., computational validation of metadata records, automatic error detection, and progress monitoring of multi-step analyses, and was successfully field-tested as part of a large epigenome research consortium. This work leaves aside theoretical considerations, and intentionally emphasizes the realistic need of providing scientists with tools that assist them in performing reproducible research. Irrespective of the technological progress, the dynamic and cell-type specific nature of the epigenome commonly requires restricting the number of analyzed samples due to resource limitations. 
The second project of this thesis introduces the software tool SCIDDO, which has been developed for the differential chromatin analysis of cellular samples with potentially limited availability. By combining statistics, algorithmics, and best practices for robust software development, SCIDDO can quickly identify biologically meaningful regions of differential chromatin marking between cell types. We demonstrate SCIDDO's usefulness in an exemplary study in which we identify regions that establish a link between chromatin and gene expression changes. SCIDDO's quantitative approach to differential chromatin analysis is user-customizable, providing the necessary flexibility to adapt SCIDDO to specific research tasks. Given the functional diversity of cell types and the dynamics of the epigenome in response to environmental changes, it is hardly realistic to map the complete epigenome even for a single organism like human or mouse. For non-model organisms, e.g., cow, pig, or dog, epigenome data is particularly scarce. The third project of this thesis investigates to what extent bioinformatics methods can compensate for the comparatively little effort that is invested in charting the epigenome of non-model species. This study implements a large integrative analysis pipeline, including state-of-the-art machine learning, to transfer chromatin data for predictive modeling between 13 species. The evidence presented here indicates that a partial regulatory epigenetic signal is stably retained even over millions of years of evolutionary distance between the considered species. This finding suggests complementary and cost-effective ways for bioinformatics to contribute to comparative epigenome analysis across species boundaries.
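Turning per-bin chromatin-state differences between two cell types into a candidate differential region is, at its core, a maximal-scoring-segment problem; a minimal Kadane-style sketch follows. The +1/-1 scoring and all names are illustrative assumptions, not SCIDDO's actual statistics.

```python
def max_scoring_segment(scores):
    """Kadane-style scan for the maximal-scoring segment: the classic way
    to turn per-bin difference scores (positive where states differ,
    negative where they agree) into one candidate differential region.
    Returns a half-open (start, end) index range and its score."""
    best_sum, best_range = float("-inf"), (0, 0)
    cur_sum, cur_start = 0.0, 0
    for i, s in enumerate(scores):
        if cur_sum <= 0:          # restart the segment at this bin
            cur_sum, cur_start = s, i
        else:
            cur_sum += s
        if cur_sum > best_sum:
            best_sum, best_range = cur_sum, (cur_start, i + 1)
    return best_range, best_sum

# +1 where chromatin states differ between the two samples, -1 where equal
scores = [-1, -1, 1, 1, 1, -1, 1, -1, -1]
region, score = max_scoring_segment(scores)
```

In a real tool the scores would come from a state-dissimilarity model and the segment scores would then be assessed statistically, but the segment-finding core looks like this.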
Export
BibTeX
@phdthesis{Ebertphd2019, TITLE = {What we leave behind : reproducibility in chromatin analysis within and across species}, AUTHOR = {Ebert, Peter}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-278311}, DOI = {10.22028/D291-27831}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {Epigenetics is the field of biology that investigates heritable factors regulating gene expression without being directly encoded in the genome of an organism. The human genome is densely packed inside a cell's nucleus in the form of chromatin. Certain constituents of chromatin play a vital role as epigenetic factors in the dynamic regulation of gene expression. Epigenetic changes on the chromatin level are thus an integral part of the mechanisms governing the development of the functionally diverse cell types in multicellular species such as human. Studying these mechanisms is not only important to understand the biology of healthy cells, but also necessary to comprehend the epigenetic component in the formation of many complex diseases. Modern wet lab technology enables scientists to probe the epigenome with high throughput and in extensive detail. The fast generation of epigenetic datasets burdens computational researchers with the challenge of rapidly performing elaborate analyses without compromising on the scientific reproducibility of the reported findings. To facilitate reproducible computational research in epigenomics, this thesis proposes a task-oriented metadata model, relying on web technology and supported by database engineering, that aims at consistent and human-readable documentation of standardized computational workflows. The suggested approach features, e.g., computational validation of metadata records, automatic error detection, and progress monitoring of multi-step analyses, and was successfully field-tested as part of a large epigenome research consortium. 
This work leaves aside theoretical considerations, and intentionally emphasizes the realistic need of providing scientists with tools that assist them in performing reproducible research. Irrespective of the technological progress, the dynamic and cell-type specific nature of the epigenome commonly requires restricting the number of analyzed samples due to resource limitations. The second project of this thesis introduces the software tool SCIDDO, which has been developed for the differential chromatin analysis of cellular samples with potentially limited availability. By combining statistics, algorithmics, and best practices for robust software development, SCIDDO can quickly identify biologically meaningful regions of differential chromatin marking between cell types. We demonstrate SCIDDO's usefulness in an exemplary study in which we identify regions that establish a link between chromatin and gene expression changes. SCIDDO's quantitative approach to differential chromatin analysis is user-customizable, providing the necessary flexibility to adapt SCIDDO to specific research tasks. Given the functional diversity of cell types and the dynamics of the epigenome in response to environmental changes, it is hardly realistic to map the complete epigenome even for a single organism like human or mouse. For non-model organisms, e.g., cow, pig, or dog, epigenome data is particularly scarce. The third project of this thesis investigates to what extent bioinformatics methods can compensate for the comparatively little effort that is invested in charting the epigenome of non-model species. This study implements a large integrative analysis pipeline, including state-of-the-art machine learning, to transfer chromatin data for predictive modeling between 13 species. The evidence presented here indicates that a partial regulatory epigenetic signal is stably retained even over millions of years of evolutionary distance between the considered species. 
This finding suggests complementary and cost-effective ways for bioinformatics to contribute to comparative epigenome analysis across species boundaries.}, }
Endnote
%0 Thesis %A Ebert, Peter %Y Lengauer, Thomas %A referee: Lenhof, Hans-Peter %A referee: Weikum, Gerhard %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T What we leave behind : reproducibility in chromatin analysis within and across species : %G eng %U http://hdl.handle.net/21.11116/0000-0003-9ADF-5 %R 10.22028/D291-27831 %U urn:nbn:de:bsz:291--ds-278311 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2019 %P 152 p. %V phd %9 phd %X Epigenetics is the field of biology that investigates heritable factors regulating gene expression without being directly encoded in the genome of an organism. The human genome is densely packed inside a cell's nucleus in the form of chromatin. Certain constituents of chromatin play a vital role as epigenetic factors in the dynamic regulation of gene expression. Epigenetic changes on the chromatin level are thus an integral part of the mechanisms governing the development of the functionally diverse cell types in multicellular species such as human. Studying these mechanisms is not only important to understand the biology of healthy cells, but also necessary to comprehend the epigenetic component in the formation of many complex diseases. Modern wet lab technology enables scientists to probe the epigenome with high throughput and in extensive detail. The fast generation of epigenetic datasets burdens computational researchers with the challenge of rapidly performing elaborate analyses without compromising on the scientific reproducibility of the reported findings. 
To facilitate reproducible computational research in epigenomics, this thesis proposes a task-oriented metadata model, relying on web technology and supported by database engineering, that aims at consistent and human-readable documentation of standardized computational workflows. The suggested approach features, e.g., computational validation of metadata records, automatic error detection, and progress monitoring of multi-step analyses, and was successfully field-tested as part of a large epigenome research consortium. This work leaves aside theoretical considerations, and intentionally emphasizes the realistic need of providing scientists with tools that assist them in performing reproducible research. Irrespective of the technological progress, the dynamic and cell-type specific nature of the epigenome commonly requires restricting the number of analyzed samples due to resource limitations. The second project of this thesis introduces the software tool SCIDDO, which has been developed for the differential chromatin analysis of cellular samples with potentially limited availability. By combining statistics, algorithmics, and best practices for robust software development, SCIDDO can quickly identify biologically meaningful regions of differential chromatin marking between cell types. We demonstrate SCIDDO's usefulness in an exemplary study in which we identify regions that establish a link between chromatin and gene expression changes. SCIDDO's quantitative approach to differential chromatin analysis is user-customizable, providing the necessary flexibility to adapt SCIDDO to specific research tasks. Given the functional diversity of cell types and the dynamics of the epigenome in response to environmental changes, it is hardly realistic to map the complete epigenome even for a single organism like human or mouse. For non-model organisms, e.g., cow, pig, or dog, epigenome data is particularly scarce. 
The third project of this thesis investigates to what extent bioinformatics methods can compensate for the comparatively little effort that is invested in charting the epigenome of non-model species. This study implements a large integrative analysis pipeline, including state-of-the-art machine learning, to transfer chromatin data for predictive modeling between 13 species. The evidence presented here indicates that a partial regulatory epigenetic signal is stably retained even over millions of years of evolutionary distance between the considered species. This finding suggests complementary and cost-effective ways for bioinformatics to contribute to comparative epigenome analysis across species boundaries. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27387
[8]
D. Gupta, “Search and Analytics Using Semantic Annotations,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Search systems help users locate relevant information in the form of text documents for keyword queries. Using text alone, it is often difficult to satisfy the user's information need. To discern the user's intent behind queries, we turn to semantic annotations (e.g., named entities and temporal expressions) that natural language processing tools can now deliver with great accuracy. This thesis develops methods and an infrastructure that leverage semantic annotations to efficiently and effectively search large document collections. This thesis makes contributions in three areas: indexing, querying, and mining of semantically annotated document collections. First, we describe an indexing infrastructure for semantically annotated document collections. The indexing infrastructure can support knowledge-centric tasks such as information extraction, relationship extraction, question answering, fact spotting and semantic search at scale across millions of documents. Second, we propose methods for exploring large document collections by suggesting semantic aspects for queries. These semantic aspects are generated by considering annotations in the form of temporal expressions, geographic locations, and other named entities. The generated aspects help guide the user to relevant documents without the need to read their contents. Third and finally, we present methods that can generate events, structured tables, and insightful visualizations from semantically annotated document collections.
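The annotation-centric indexing described above can be pictured as an inverted index whose posting lists are keyed by semantic annotations (named entities, temporal expressions) rather than raw terms. The sketch below shows only this basic layout; the function names and document structure are illustrative, not the thesis's actual infrastructure.

```python
from collections import defaultdict

def build_annotation_index(docs):
    """Toy inverted index over semantic annotations: each posting list is
    keyed by an annotation such as a named entity or temporal expression,
    so queries can match on meaning rather than surface terms."""
    index = defaultdict(list)
    for doc_id, annotations in docs.items():
        for ann in annotations:
            index[ann].append(doc_id)
    return index

def query(index, *anns):
    # Conjunctive query: documents carrying all requested annotations.
    ids = [set(index.get(a, ())) for a in anns]
    return set.intersection(*ids) if ids else set()

# Hypothetical annotated collection: (type, value) pairs per document.
docs = {
    "d1": [("entity", "Saarbrücken"), ("time", "2019")],
    "d2": [("entity", "Saarbrücken"), ("time", "1999")],
    "d3": [("time", "2019")],
}
hits = query(build_annotation_index(docs), ("entity", "Saarbrücken"), ("time", "2019"))
```

Grouping postings by annotation type is also what makes aspect suggestion cheap: the temporal and entity aspects of a result set can be read directly off the matching posting lists.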
Export
BibTeX
@phdthesis{GUPTAphd2019, TITLE = {Search and Analytics Using Semantic Annotations}, AUTHOR = {Gupta, Dhruv}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-300780}, DOI = {10.22028/D291-30078}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, }
Endnote
%0 Thesis %A Gupta, Dhruv %Y Berberich, Klaus %A referee: Weikum, Gerhard %A referee: Bedathur, Srikanta %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Search and Analytics Using Semantic Annotations : %G eng %U http://hdl.handle.net/21.11116/0000-0005-7695-E %R 10.22028/D291-30078 %U urn:nbn:de:bsz:291--ds-300780 %F OTHER: hdl:20.500.11880/28516 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2019 %P xxviii, 211 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/28516
[9]
Y. Ibrahim, “Understanding Quantities in Web Tables and Text,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
There is a wealth of schema-free tables on the web. The text accompanying these tables explains and qualifies the numerical quantities given in the tables. Despite this ubiquity of tabular data, there is little research that harnesses this wealth of data by semantically understanding the information that is conveyed rather ambiguously in these tables. This information can be disambiguated only with the help of the accompanying text. In the process of understanding quantity mentions in tables and text, we are faced with the following challenges: First, there is no comprehensive knowledge base for anchoring quantity mentions. Second, tables are created ad hoc without a standard schema and with ambiguous header names; table cells also usually contain abbreviations. Third, quantities can be written in multiple forms and units of measure. Fourth, the text usually refers to the quantities in tables using aggregation, approximation, and different scales. In this thesis, we target these challenges through the following contributions: - We present the Quantity Knowledge Base (QKB), a knowledge base for representing quantity mentions. We construct the QKB by importing information from Freebase, Wikipedia, and other online sources. - We propose Equity: a system for automatically canonicalizing header names and cell values onto concepts, classes, entities, and uniquely represented quantities registered in a knowledge base. We devise a probabilistic graphical model that captures coherence dependencies between cells in tables and candidate items in the space of concepts, entities, and quantities. Then, we cast the inference problem into an efficient algorithm based on random walks over weighted graphs. - We introduce the quantity alignment problem: computing bidirectional links between textual mentions of quantities and the corresponding table cells. We propose BriQ: a system for computing such alignments. BriQ copes with the specific challenges of approximate quantities, aggregated quantities, and calculated quantities. - We design ExQuisiTe: a web application that identifies mentions of quantities in text and tables, aligns quantity mentions in the text with related quantity mentions in tables, and generates salient suggestions for extractive text summarization systems.
Export
BibTeX
@phdthesis{yusraphd2019, TITLE = {Understanding Quantities in Web Tables and Text}, AUTHOR = {Ibrahim, Yusra}, LANGUAGE = {eng}, DOI = {10.22028/D291-29657}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, }
Endnote
%0 Thesis %A Ibrahim, Yusra %Y Weikum, Gerhard %A referee: Riedewald, Mirek %A referee: Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Understanding Quantities in Web Tables and Text : %G eng %U http://hdl.handle.net/21.11116/0000-0005-4384-A %R 10.22028/D291-29657 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2019 %P 116 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/28300
[10]
D. Issac, “On some covering, partition and connectivity problems in graphs,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
We look at some graph problems related to covering, partition, and connectivity. First, we study the problems of covering and partitioning edges with bicliques, especially from the viewpoint of parameterized complexity. For the partition problem, we develop much more efficient algorithms than the ones previously known. In contrast, for the cover problem, our lower bounds show that the known algorithms are probably optimal. Next, we move on to graph coloring, which is probably the most extensively studied partition problem on graphs. Hadwiger’s conjecture is a long-standing open problem related to vertex coloring. We prove the conjecture for a special class of graphs, namely squares of 2-trees, and show that square graphs are important in connection with Hadwiger’s conjecture. Then, we study a coloring problem that has been emerging recently, called rainbow coloring. This problem lies in the intersection of coloring and connectivity. We study different variants of rainbow coloring and present bounds and complexity results for them. Finally, we move on to another parameter related to connectivity, called spanning tree congestion (STC), and give tight bounds for STC in general graphs and random graphs.
Export
BibTeX
@phdthesis{Issacphd2019, TITLE = {On some covering, partition and connectivity problems in graphs}, AUTHOR = {Issac, Davis}, LANGUAGE = {eng}, DOI = {10.22028/D291-29620}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, }
Endnote
%0 Thesis %A Issac, Davis %Y Karrenbauer, Andreas %A referee: Mehlhorn, Kurt %A referee: Chandran, L. Sunil %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On some covering, partition and connectivity problems in graphs : %G eng %U http://hdl.handle.net/21.11116/0000-0004-D665-9 %R 10.22028/D291-29620 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2019 %P 191 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/28007
[11]
S. Karaev, “Matrix factorization over dioids and its applications in data mining,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Matrix factorizations are an important tool in data mining, and they have been used extensively for finding latent patterns in data. They often make it possible to separate structure from noise, as well as to considerably reduce the dimensionality of the input matrix. While classical matrix decomposition methods, such as nonnegative matrix factorization (NMF) and singular value decomposition (SVD), have proved very useful in data analysis, they are limited by the underlying algebraic structure. NMF, in particular, tends to break patterns into smaller bits, often mixing them with each other. This happens because overlapping patterns interfere with each other, making it harder to tell them apart. In this thesis we study matrix factorization over algebraic structures known as dioids, which are characterized by the lack of additive inverses (“negative numbers”) and the idempotency of addition (a + a = a). Using dioids makes it easier to separate overlapping features and, in particular, helps to deal with the above-mentioned pattern-breaking problem. We consider different types of dioids, ranging from continuous (subtropical and tropical algebras) to discrete (Boolean algebra). Among these, the Boolean algebra is perhaps the best known, and there exist methods for obtaining high-quality Boolean matrix factorizations in terms of reconstruction error. In this work, however, a different objective function is used: the description length of the data, which enables us to obtain compact and highly interpretable results. The tropical and subtropical algebras, on the other hand, are much less known in the data mining field. While they find applications in areas such as job scheduling and discrete event systems, they are virtually unknown in the context of data analysis. We use them to obtain idempotent nonnegative factorizations that are similar to NMF, but are better at separating the most prominent features of the data.
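The idempotent addition that characterizes these dioids can be made concrete with a small sketch (illustrative only, not code from the thesis; the function name and matrices are made up): in the subtropical (max-times) algebra, the matrix product replaces summation with max, so overlapping patterns do not accumulate; the strongest pattern simply wins in each cell.

```python
def max_times_product(B, C):
    """Matrix product over the subtropical (max-times) dioid:
    'addition' is max (idempotent, since max(a, a) = a) and
    'multiplication' is the ordinary product of nonnegative reals."""
    n, k, m = len(B), len(C), len(C[0])
    return [[max(B[i][t] * C[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

# Two overlapping rank-1 patterns: the middle row belongs to both.
B = [[1.0, 0.0],
     [1.0, 1.0],
     [0.0, 1.0]]
C = [[2.0, 2.0, 0.0],
     [0.0, 3.0, 3.0]]
A = max_times_product(B, C)
# A == [[2.0, 2.0, 0.0], [2.0, 3.0, 3.0], [0.0, 3.0, 3.0]]
# A standard product would give A[1][1] == 5.0; here the overlap
# does not inflate the entry, which is what eases pattern separation.
```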
Export
BibTeX
@phdthesis{Karaevphd2019, TITLE = {Matrix factorization over dioids and its applications in data mining}, AUTHOR = {Karaev, Sanjar}, LANGUAGE = {eng}, DOI = {10.22028/D291-28661}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, }
Endnote
%0 Thesis %A Karaev, Sanjar %Y Miettinen, Pauli %A referee: Weikum, Gerhard %A referee: van Leeuwen, Matthijs %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Matrix factorization over dioids and its applications in data mining : %G eng %U http://hdl.handle.net/21.11116/0000-0005-4369-A %R 10.22028/D291-28661 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2019 %P 113 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27903
[12]
T. Leimkühler, “Artificial Intelligence for Efficient Image-based View Synthesis,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Synthesizing novel views from image data is a widely investigated topic in both computer graphics and computer vision, with many applications such as stereo or multi-view rendering for virtual reality, light field reconstruction, and image post-processing. While image-based approaches have the advantage of reduced computational load compared to classical model-based rendering, efficiency is still a major concern. This thesis demonstrates how concepts and tools from artificial intelligence can be used to increase the efficiency of image-based view synthesis algorithms. In particular, it shows how machine learning can help to generate point patterns useful for a variety of computer graphics tasks, how path planning can guide image warping, how sparsity-enforcing optimization can lead to significant speedups in interactive distribution-effect rendering, and how probabilistic inference can be used to perform real-time 2D-to-3D conversion.
Export
BibTeX
@phdthesis{Leimphd2019, TITLE = {Artificial Intelligence for Efficient Image-based View Synthesis}, AUTHOR = {Leimk{\"u}hler, Thomas}, LANGUAGE = {eng}, DOI = {10.22028/D291-28379}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, }
Endnote
%0 Thesis %A Leimk&#252;hler, Thomas %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %A referee: Lensch, Hendrik %A referee: Drettakis, George %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Artificial Intelligence for Efficient Image-based View Synthesis : %G eng %U http://hdl.handle.net/21.11116/0000-0004-A589-7 %R 10.22028/D291-28379 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2019 %P 136 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27664
[13]
E. Levinkov, “Generalizations of the Multicut Problem for Computer Vision,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Graph decomposition has always been a very important concept in machine learning and computer vision. Many tasks, such as image and mesh segmentation, community detection in social networks, object tracking, and human pose estimation, can be formulated as graph decomposition problems. The multicut problem in particular is a popular model for optimizing over decompositions of a given graph. Its main advantage is that no prior knowledge about the number of components or their sizes is required. However, it has several limitations, which we address in this thesis: First, the multicut problem allows one to specify a cost or reward only for putting two direct neighbours into distinct components, which limits the expressiveness of the cost function. We introduce special edges into the graph that allow a cost or reward to be defined for putting any two vertices into distinct components, while preserving the original set of feasible solutions. We show that this considerably improves the quality of image and mesh segmentations. Second, the multicut problem is notoriously NP-hard for general graphs, which limits its applications to small super-pixel graphs. We define and implement two primal feasible heuristics to solve the problem. They do not provide any guarantees on the runtime or the quality of solutions, but in practice they show good convergence behaviour. We perform an extensive comparison on multiple graphs of different sizes and properties. Third, we extend the multicut framework by introducing node labels, so that we can jointly optimize graph decomposition and node classification with exactly the same optimization algorithm, thus eliminating the need to hand-tune optimizers for a particular task. To prove its universality, we apply it to diverse computer vision tasks, including human pose estimation, multiple object tracking, and instance-aware semantic segmentation.
We show that we can improve the results over the prior art using exactly the same data as in the original works. Finally, we employ multicuts in two applications: 1) a client-server tool for interactive video segmentation: after pre-processing of the video, a user draws strokes on several frames and a time-coherent segmentation of the entire video is performed on the fly; 2) a method for simultaneous segmentation and tracking of living cells in microscopy data. This task is challenging because cells split, and our algorithm accounts for this by creating parental hierarchies. We also present results on multiple model fitting: we find models in data heavily corrupted by noise by identifying the components that define these models using higher-order multicuts. We introduce an extension that allows our optimization to pick better hyperparameters for each discovered model. In summary, this thesis extends the multicut problem in different directions, proposes algorithms for its optimization, and applies it to novel data and settings.
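The multicut objective the abstract refers to can be sketched in a few lines (an illustrative toy, not code from the thesis; the function name and the example graph are made up): a decomposition is encoded as a node-to-component map, and an edge contributes its cost exactly when its endpoints land in distinct components.

```python
def multicut_cost(edges, component):
    """Objective value of the decomposition induced by `component`
    (a node -> component-id map): an edge (u, v, c) contributes its
    cost c exactly when u and v lie in distinct components."""
    return sum(c for u, v, c in edges if component[u] != component[v])

# Toy instance: a negative cost rewards cutting an edge,
# a positive cost penalizes cutting it.
edges = [("a", "b", -2.0), ("b", "c", 1.5), ("a", "c", 0.5)]
cut_ab = {"a": 0, "b": 1, "c": 1}     # cuts a-b and a-c
cost = multicut_cost(edges, cut_ab)   # -2.0 + 0.5 = -1.5
```

Note that the number of components is implicit in the map, which reflects the advantage stated above: the model needs no prior knowledge of how many components there are.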
Export
BibTeX
@phdthesis{Levinkovphd2013, TITLE = {Generalizations of the Multicut Problem for Computer Vision}, AUTHOR = {Levinkov, Evgeny}, LANGUAGE = {eng}, DOI = {10.22028/D291-27909}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, }
[14]
S. Nikumbh, “Interpretable Machine Learning Methods for Prediction and Analysis of Genome Regulation in 3D,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
With the development of chromosome conformation capture-based techniques, we now know that chromatin is packed in three-dimensional (3D) space inside the cell nucleus. Changes in the 3D chromatin architecture have already been implicated in diseases such as cancer. Thus, a better understanding of this 3D conformation is of interest to help enhance our comprehension of the complex, multipronged regulatory mechanisms of the genome. The work described in this dissertation largely focuses on the development and application of interpretable machine learning methods for prediction and analysis of long-range genomic interactions output from chromatin interaction experiments. In the first part, we demonstrate that the genetic sequence information at the genomic loci is predictive of the long-range interactions of a particular locus of interest (LoI). For example, the genetic sequence information at and around enhancers can help predict whether it interacts with a promoter region of interest. This is achieved by building string kernel-based support vector classifiers together with two novel, intuitive visualization methods. These models suggest a potential general role of short tandem repeat motifs in the 3D genome organization. However, the insights gained from these models are still coarse-grained. To this end, we devised a machine learning method, called CoMIK for Conformal Multi-Instance Kernels, capable of providing more fine-grained insights. When comparing sequences of variable length in the supervised learning setting, CoMIK can not only identify the features important for classification but also locate them within the sequence. Such precise identification of important segments of the whole sequence can help in gaining de novo insights into any role played by the intervening chromatin towards long-range interactions.
Although CoMIK primarily uses only genetic sequence information, it can also simultaneously utilize other information modalities, such as functional genomics data, if available. The second part describes our pipeline, pHDee, for easy manipulation of large amounts of 3D genomics data. We used the pipeline to analyze HiChIP experimental data for studying the 3D architectural changes in Ewing sarcoma (EWS), a rare cancer affecting adolescents. In particular, HiChIP data for two experimental conditions, doxycycline-treated and untreated, and for primary tumor samples is analyzed. We demonstrate that pHDee facilitates processing and easy integration of large amounts of 3D genomics data analysis together with other data-intensive bioinformatics analyses.
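For intuition on the multi-instance idea behind CoMIK, the following hypothetical sketch (an assumed simplification, not the published method) treats a variable-length sequence as a bag of fixed-length segments, each represented by a k-mer count vector; a bag-level kernel sums segment-level similarities, so individual segment contributions can later be inspected for localization.

```python
# Illustrative multi-instance representation for DNA sequences
# (assumed simplification of the CoMIK idea, not its implementation).
from collections import Counter
from itertools import product

def kmer_vector(seq, k=2):
    """Spectrum features: counts of every length-k word over ACGT."""
    alphabet = [''.join(p) for p in product('ACGT', repeat=k)]
    counts = Counter(seq[i:i+k] for i in range(len(seq) - k + 1))
    return [counts[a] for a in alphabet]

def to_bag(seq, seg_len=10, k=2):
    """Split a sequence into segments (the 'instances' of the bag)."""
    return [kmer_vector(seq[i:i+seg_len], k)
            for i in range(0, len(seq) - seg_len + 1, seg_len)]

def bag_kernel(bag_a, bag_b):
    """Bag-level kernel: sum of segment-level linear kernels. The
    per-pair terms indicate which segments drive the similarity."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    return sum(dot(x, y) for x in bag_a for y in bag_b)
```

A kernel of this bag-of-segments form can be plugged into a support vector classifier, and decomposing the kernel value over segments is what enables localization within the sequence.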
Export
BibTeX
@phdthesis{Nikumbhphd2019, TITLE = {Interpretable Machine Learning Methods for Prediction and Analysis of Genome Regulation in {3D}}, AUTHOR = {Nikumbh, Sarvesh}, LANGUAGE = {eng}, DOI = {10.22028/D291-28153}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {With the development of chromosome conformation capture-based techniques, we now know that chromatin is packed in three-dimensional (3D) space inside the cell nucleus. Changes in the 3D chromatin architecture have already been implicated in diseases such as cancer. Thus, a better understanding of this 3D conformation is of interest to help enhance our comprehension of the complex, multipronged regulatory mechanisms of the genome. The work described in this dissertation largely focuses on development and application of interpretable machine learning methods for prediction and analysis of long-range genomic interactions output from chromatin interaction experiments. In the first part, we demonstrate that the genetic sequence information at the genomic loci is predictive of the long-range interactions of a particular locus of interest (LoI). For example, the genetic sequence information at and around enhancers can help predict whether it interacts with a promoter region of interest. This is achieved by building string kernel-based support vector classifiers together with two novel, intuitive visualization methods. These models suggest a potential general role of short tandem repeat motifs in the 3D genome organization. But, the insights gained out of these models are still coarse-grained. To this end, we devised a machine learning method, called CoMIK for Conformal Multi-Instance Kernels, capable of providing more fine-grained insights. 
When comparing sequences of variable length in the supervised learning setting, CoMIK can not only identify the features important for classification but also locate them within the sequence. Such precise identification of important segments of the whole sequence can help in gaining de novo insights into any role played by the intervening chromatin towards long-range interactions. Although CoMIK primarily uses only genetic sequence information, it can also simultaneously utilize other information modalities such as the numerous functional genomics data if available. The second part describes our pipeline, pHDee, for easy manipulation of large amounts of 3D genomics data. We used the pipeline for analyzing HiChIP experimental data for studying the 3D architectural changes in Ewing sarcoma (EWS) which is a rare cancer affecting adolescents. In particular, HiChIP data for two experimental conditions, doxycycline-treated and untreated, and for primary tumor samples is analyzed. We demonstrate that pHDee facilitates processing and easy integration of large amounts of 3D genomics data analysis together with other data-intensive bioinformatics analyses.}, }
[15]
K. Popat, “Credibility Analysis of Textual Claims with Explainable Evidence,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Despite being a vast resource of valuable information, the Web has been polluted by the spread of false claims. Increasing hoaxes, fake news, and misleading information on the Web have given rise to many fact-checking websites that manually assess these doubtful claims. However, the rapid pace and large scale of misinformation spread have made manual verification a bottleneck. This calls for credibility assessment tools that can automate the verification process. Prior works in this domain make strong assumptions about the structure of the claims and the communities where they are made. Most importantly, the black-box techniques proposed in prior works lack the ability to explain why a certain statement is deemed credible or not. To address these limitations, this dissertation proposes a general framework for automated credibility assessment that makes no assumption about the structure or origin of the claims. Specifically, we propose a feature-based model, which automatically retrieves relevant articles about the given claim and assesses its credibility by capturing the mutual interaction between the language style of the relevant articles, their stance towards the claim, and the trustworthiness of the underlying web sources. We further enhance our credibility assessment approach and propose a neural-network-based model. Unlike the feature-based model, this model does not rely on feature engineering and external lexicons. Both our models make their assessments interpretable by extracting explainable evidence from judiciously selected web sources. We utilize our models to develop a Web interface, CredEye, which enables users to automatically assess the credibility of a textual claim and to drill down into the assessment by browsing through judiciously and automatically selected evidence snippets.
In addition, we study the problem of stance classification and propose a neural-network-based model for predicting the stance of diverse user perspectives on controversial claims. Given a controversial claim and a user comment, our stance classification model predicts whether the user comment supports or opposes the claim.
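The stance classification task described above can be made concrete with a deliberately simple baseline. The thesis uses a neural model; the bag-of-cues perceptron below, including its two features (lexical overlap with the claim and a small negation lexicon), is a hypothetical stand-in for illustration only.

```python
# Toy stance classifier: given (claim, comment), predict support/oppose.
# Features and model are hypothetical, not the thesis's neural approach.

def featurize(claim, comment):
    claim_words = set(claim.lower().split())
    comment_words = comment.lower().split()
    overlap = sum(w in claim_words for w in comment_words)
    negation = sum(w in {'not', 'no', 'never', 'false', 'wrong'}
                   for w in comment_words)
    return [1.0, overlap, negation]  # bias, lexical overlap, negation cues

def train_perceptron(pairs, labels, epochs=20):
    """Standard perceptron updates; labels are +1 (support) / -1 (oppose)."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (claim, comment), y in zip(pairs, labels):
            x = featurize(claim, comment)
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:  # mistake
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, claim, comment):
    s = sum(wi * xi for wi, xi in zip(w, featurize(claim, comment)))
    return 'support' if s > 0 else 'oppose'
```

On a few toy (claim, comment) pairs this learns that high overlap without negation signals support, while negation cues flip the prediction to oppose.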
Export
BibTeX
@phdthesis{Popatphd2019, TITLE = {Credibility Analysis of Textual Claims with Explainable Evidence}, AUTHOR = {Popat, Kashyap}, LANGUAGE = {eng}, DOI = {10.22028/D291-30005}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {Despite being a vast resource of valuable information, the Web has been polluted by the spread of false claims. Increasing hoaxes, fake news, and misleading information on the Web have given rise to many fact-checking websites that manually assess these doubtful claims. However, the rapid speed and large scale of misinformation spread have become the bottleneck for manual verification. This calls for credibility assessment tools that can automate this verification process. Prior works in this domain make strong assumptions about the structure of the claims and the communities where they are made. Most importantly, black-box techniques proposed in prior works lack the ability to explain why a certain statement is deemed credible or not. To address these limitations, this dissertation proposes a general framework for automated credibility assessment that does not make any assumption about the structure or origin of the claims. Specifically, we propose a feature-based model, which automatically retrieves relevant articles about the given claim and assesses its credibility by capturing the mutual interaction between the language style of the relevant articles, their stance towards the claim, and the trustworthiness of the underlying web sources. We further enhance our credibility assessment approach and propose a neural-network-based model. Unlike the feature-based model, this model does not rely on feature engineering and external lexicons. Both our models make their assessments interpretable by extracting explainable evidence from judiciously selected web sources. 
We utilize our models and develop a Web interface, CredEye, which enables users to automatically assess the credibility of a textual claim and dissect into the assessment by browsing through judiciously and automatically selected evidence snippets. In addition, we study the problem of stance classification and propose a neural-network-based model for predicting the stance of diverse user perspectives regarding the controversial claims. Given a controversial claim and a user comment, our stance classification model predicts whether the user comment is supporting or opposing the claim.}, }
[16]
N. Robertini, “Model-based Human Performance Capture in Outdoor Scenes,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Technologies for motion and performance capture of real actors have enabled the creation of realistic-looking virtual humans through detail and deformation transfer, at the cost of extensive manual work and sophisticated in-studio marker-based systems. This thesis pushes the boundaries of performance capture by proposing automatic algorithms for robust 3D skeleton and detailed surface tracking in less constrained multi-view outdoor scenarios. Contributions include new multi-layered human body representations designed for effective model-based time-consistent reconstruction in complex dynamic environments with varying illumination, from a set of vision cameras. We design dense surface refinement approaches to enable smooth silhouette-free model-to-image alignment, as well as coarse-to-fine tracking techniques to enable joint estimation of skeleton motion and fine-scale surface deformations in complicated scenarios. High-quality results attained on challenging application scenarios confirm the contributions and show great potential for the automatic creation of personalized 3D virtual humans.
Export
BibTeX
@phdthesis{Robertini_PhD2019, TITLE = {Model-based Human Performance Capture in Outdoor Scenes}, AUTHOR = {Robertini, Nadia}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-285887}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {Technologies for motion and performance capture of real actors have enabled the creation of realistic-looking virtual humans through detail and deformation transfer at the cost of extensive manual work and sophisticated in-studio marker-based systems. This thesis pushes the boundaries of performance capture by proposing automatic algorithms for robust 3D skeleton and detailed surface tracking in less constrained multi-view outdoor scenarios. Contributions include new multi-layered human body representations designed for effective model-based time-consistent reconstruction in complex dynamic environments with varying illumination, from a set of vision cameras. We design dense surface refinement approaches to enable smooth silhouette-free model-to-image alignment, as well as coarse-to-fine tracking techniques to enable joint estimation of skeleton motion and fine-scale surface deformations in complicated scenarios. High-quality results attained on challenging application scenarios confirm the contributions and show great potential for the automatic creation of personalized 3D virtual humans.}, }
[17]
H. Sattar, “Intents and Preferences Prediction Based on Implicit Human Cues,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Visual search is an important task, and it is part of daily human life. Thus, it has been a long-standing goal in Computer Vision to develop methods for analysing human search intent and preferences. As the target of the search exists only in the mind of the person, search intent prediction remains challenging for machine perception. In this thesis, we focus on advancing techniques for search target and preference prediction from implicit human cues. First, we propose a search target inference algorithm from human fixation data recorded during visual search. In contrast to previous work that has focused on individual instances as a search target in a closed world, we propose the first approach to predict the search target in open-world settings by learning the compatibility between observed fixations and potential search targets. Second, we further broaden the scope of search target prediction to categorical classes, such as object categories and attributes. However, state-of-the-art models for categorical recognition generally require large amounts of training data, which is prohibitive for gaze data. To address this challenge, we propose a novel Gaze Pooling Layer that integrates gaze information into CNN-based architectures as an attention mechanism, incorporating both spatial and temporal aspects of human gaze behaviour. Third, we go one step further and investigate the feasibility of combining our gaze embedding approach with the power of generative image models to visually decode, i.e. create a visual representation of, the search target. Fourth, for the first time, we study the effect of body shape on people's preferences for outfits. We propose a novel and robust multi-photo approach to estimate the body shape of each user and build a conditional model of clothing categories given body shape. We demonstrate that in real-world data, clothing categories and body shapes are correlated. 
We show that our approach estimates a realistic-looking body shape that captures a user's weight group and body shape type, even from a single image of a clothed person. However, an accurate depiction of the naked body is considered highly private and therefore might not be consented to by most people. We first studied the perception of such technology via a user study. Then, in the last part of this thesis, we ask whether the automatic extraction of such information can be effectively evaded. In summary, this thesis addresses several different tasks that aim to enable vision systems to analyse human search intent and preferences in real-world scenarios. In particular, the thesis proposes several novel ideas and models for visual search target prediction from human fixation data, studies for the first time the correlation between body shape and clothing categories, opening a new direction in clothing recommendation systems, and introduces a new topic in privacy and computer vision aimed at preventing automatic 3D shape extraction from images.
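The gaze-as-attention idea in the abstract can be illustrated by a minimal sketch (an assumed simplification, not the thesis's Gaze Pooling Layer): CNN feature maps are re-weighted by a fixation-density map before spatial pooling, so gazed-at regions dominate the pooled descriptor.

```python
# Hypothetical sketch of gaze-weighted spatial pooling. Feature maps
# stand in for CNN activations; the gaze map is a normalized fixation
# density over the same spatial grid.

def gaze_pooling(feature_maps, gaze_map):
    """feature_maps: C x H x W nested lists; gaze_map: H x W weights.
    Returns a C-dimensional descriptor: per channel, the gaze-weighted
    average of the spatial activations."""
    total = sum(sum(row) for row in gaze_map) or 1.0  # avoid div by 0
    return [
        sum(fmap[i][j] * gaze_map[i][j]
            for i in range(len(gaze_map))
            for j in range(len(gaze_map[0]))) / total
        for fmap in feature_maps
    ]
```

With all fixation mass on one cell, the descriptor reduces to the activations at that location, which is the attention behaviour the mechanism is meant to capture.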
Export
BibTeX
@phdthesis{Sattar_PhD2019, TITLE = {Intents and Preferences Prediction Based on Implicit Human Cues}, AUTHOR = {Sattar, Hosnieh}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-281920}, DOI = {10.22028/D291-28192}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {Visual search is an important task, and it is part of daily human life. Thus, it has been a long-standing goal in Computer Vision to develop methods aiming at analysing human search intent and preferences. As the target of the search only exists in mind of the person, search intent prediction remains challenging for machine perception. In this thesis, we focus on advancing techniques for search target and preference prediction from implicit human cues. First, we propose a search target inference algorithm from human fixation data recorded during visual search. In contrast to previous work that has focused on individual instances as a search target in a closed world, we propose the first approach to predict the search target in open-world settings by learning the compatibility between observed fixations and potential search targets. Second, we further broaden the scope of search target prediction to categorical classes, such as object categories and attributes. However, state of the art models for categorical recognition, in general, require large amounts of training data, which is prohibitive for gaze data. To address this challenge, we propose a novel Gaze Pooling Layer that integrates gaze information into CNN-based architectures as an attention mechanism -- incorporating both spatial and temporal aspects of human gaze behaviour. Third, we go one step further and investigate the feasibility of combining our gaze embedding approach, with the power of generative image models to visually decode, i.e. create a visual representation of, the search target. 
Forth, for the first time, we studied the effect of body shape on people preferences of outfits. We propose a novel and robust multi-photo approach to estimate the body shapes of each user and build a conditional model of clothing categories given body-shape. We demonstrate that in real-world data, clothing categories and body-shapes are correlated. We show that our approach estimates a realistic looking body shape that captures a user{\textquoteright}s weight group and body shape type, even from a single image of a clothed person. However, an accurate depiction of the naked body is considered highly private and therefore, might not be consented by most people. First, we studied the perception of such technology via a user study. Then, in the last part of this thesis, we ask if the automatic extraction of such information can be effectively evaded. In summary, this thesis addresses several different tasks that aims to enable the vision system to analyse human search intent and preferences in real-world scenarios. In particular, the thesis proposes several novel ideas and models in visual search target prediction from human fixation data, for the first time studied the correlation between shape and clothing categories opening a new direction in clothing recommendation systems, and introduces a new topic in privacy and computer vision, aimed at preventing automatic 3D shape extraction from images.}, }
[18]
M. Simeonovski, “Accountable infrastructure and its impact on internet security and privacy,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
The Internet infrastructure relies on the correct functioning of the basic underlying protocols, which were designed for functionality. Security and privacy have been added post hoc, mostly by applying cryptographic means to different layers of communication. Lacking accountability as a fundamental property, the Internet infrastructure has no built-in ability to associate an action with the responsible entity, nor to detect or prevent misbehavior. In this thesis, we study accountability from several perspectives. First, we study the need for accountability in anonymous communication networks, as a mechanism that provides repudiation for proxy nodes by tracing back selected outbound traffic in a provable manner. Second, we design a framework that lays a foundation for enforcing the right-to-be-forgotten law in a scalable and automated manner; the framework provides a technical means for users to prove their eligibility for content removal from search results. Third, we analyze the Internet infrastructure, determining potential security risks and threats imposed by dependencies among entities on the Internet. Finally, we evaluate the feasibility of using hop-count filtering as a mechanism for mitigating Distributed Reflective Denial-of-Service (DRDoS) attacks, and show conceptually that it cannot prevent them.
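The hop-count filtering scheme evaluated in the final contribution rests on a simple observation: a packet's hop count can be inferred from its received TTL, and spoofed packets tend to show a hop count inconsistent with the one previously learned for the claimed source. A minimal sketch of that inference — the function names, the hop table, and the tolerance parameter are illustrative assumptions, not details taken from the thesis:

```python
# Common initial TTL values used by widespread OS network stacks.
INITIAL_TTLS = (32, 64, 128, 255)

def infer_hop_count(observed_ttl):
    """Smallest plausible initial TTL minus the observed TTL
    approximates the number of hops the packet has travelled."""
    initial = min(t for t in INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def looks_spoofed(src_ip, observed_ttl, hop_table, tolerance=0):
    """Flag a packet whose inferred hop count deviates from the
    hop count previously learned for its claimed source address."""
    expected = hop_table.get(src_ip)
    if expected is None:
        return False  # no baseline yet; accept and learn separately
    return abs(infer_hop_count(observed_ttl) - expected) > tolerance
```

In a reflective attack the packets the victim receives are emitted by legitimate reflectors, not by the spoofing attacker, which gives an intuition for the thesis's negative finding that such filtering cannot prevent DRDoS attacks.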
Export
BibTeX
@phdthesis{Simeonphd2019, TITLE = {Accountable infrastructure and its impact on internet security and privacy}, AUTHOR = {Simeonovski, Milivoj}, LANGUAGE = {eng}, DOI = {10.22028/D291-29890}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {The Internet infrastructure relies on the correct functioning of the basic underlying protocols, which were designed for functionality. Security and privacy have been added post hoc, mostly by applying cryptographic means to different layers of communication. In the absence of accountability, as a fundamental property, the Internet infrastructure does not have a built-in ability to associate an action with the responsible entity, neither to detect or prevent misbehavior. In this thesis, we study accountability from a few different perspectives. First, we study the need of having accountability in anonymous communication networks as a mechanism that provides repudiation for the proxy nodes by tracing back selected outbound traffic in a provable manner. Second, we design a framework that provides a foundation to support the enforcement of the right to be forgotten law in a scalable and automated manner. The framework provides a technical mean for the users to prove their eligibility for content removal from the search results. Third, we analyze the Internet infrastructure determining potential security risks and threats imposed by dependencies among the entities on the Internet. Finally, we evaluate the feasibility of using hop count filtering as a mechanism for mitigating Distributed Reflective Denial-of-Service attacks, and conceptually show that it cannot work to prevent these attacks.}, }
[19]
J. Steil, “Mobile Eye Tracking for Everyone,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
Eye tracking and gaze-based human-computer interfaces have become a practical modality in desktop settings, since remote eye tracking is efficient and affordable. However, remote eye tracking remains constrained to indoor, laboratory-like conditions, in which lighting and user position need to be controlled. Mobile eye tracking has the potential to overcome these limitations and to allow people to move around freely and to use eye tracking on a daily basis during their everyday routine. However, mobile eye tracking currently faces two fundamental challenges that prevent it from being practically usable and that, consequently, have to be addressed before mobile eye tracking can truly be used by everyone: Mobile eye tracking needs to be advanced and made fully functional in unconstrained environments, and it needs to be made socially acceptable. Numerous sensing and analysis methods were initially developed for remote eye tracking and have been successfully applied for decades. Unfortunately, these methods are limited in terms of functionality and correctness, or even unsuitable for application in mobile eye tracking. Therefore, the majority of fundamental definitions, eye tracking methods, and gaze estimation approaches cannot be borrowed from remote eye tracking without adaptation. For example, the definitions of specific eye movements, like classical fixations, need to be extended to mobile settings where natural user and head motion are omnipresent. Corresponding analytical methods need to be adjusted or completely reimplemented based on novel approaches encoding the human gaze behaviour. Apart from these technical challenges, an entirely new, and yet under-explored, topic required for the breakthrough of mobile eye tracking as everyday technology is the overcoming of social obstacles. A first crucial key issue to defuse social objections is the building of acceptance towards mobile eye tracking. 
Hence, it is essential to replace the bulky appearance of current head-mounted eye trackers with an unobtrusive, appealing, and trendy design. The second high-priority theme, of increasing importance for everyone, is privacy and its protection, a problem that research and industry have so far largely neglected. To establish true confidence, future devices have to strike a fine balance between protecting users’ and bystanders’ privacy and attracting users and convincing them of the devices’ necessity, utility, and potential through useful and beneficial features. Overcoming these technical challenges and social obstacles is the prerequisite for developing the variety of novel and exciting applications that can establish mobile eye tracking as a new paradigm that eases our everyday life. This thesis addresses core technical challenges of mobile eye tracking that currently prevent it from being widely adopted. Specifically, this thesis proves that 3D data used for the calibration of mobile eye trackers improves gaze estimation and significantly reduces the parallax error. Further, it presents the first effective fixation detection method for head-mounted devices that is robust against the prevalent user and gaze-target motion. To achieve social acceptability, this thesis proposes an innovative and unobtrusive design for future mobile eye tracking devices and builds the first prototype with fully frame-embedded eye cameras, combined with a calibration-free, appearance-based gaze estimation approach trained with deep learning. To protect users’ and bystanders’ privacy in the presence of head-mounted eye trackers, this thesis presents another first-of-its-kind prototype: it identifies privacy-sensitive situations and automatically enables or disables the eye tracker’s first-person camera by means of a mechanical shutter, leveraging a combination of deep scene and eye-movement features.
Nevertheless, solving technical challenges and social obstacles alone is not sufficient to make mobile eye tracking attractive for the masses. The key to success is the development of convincingly useful, innovative, and essential applications. To extend the protection of users’ privacy on the software side as well, this thesis presents the first privacy-aware VR gaze interface using differential privacy. This method adds noise to recorded eye tracking data so that privacy-sensitive information like a user’s gender or identity is protected without impeding the utility of the data itself. In addition, the first large-scale online survey is conducted to understand users’ concerns with eye tracking. To develop and evaluate novel applications, this thesis presents the first publicly available long-term eye tracking datasets. They are used to show the unsupervised detection of users’ activities from eye movements alone using novel and efficient video-based encoding approaches as well as to propose the first proof-of-concept method to forecast users’ attentive behaviour during everyday mobile interactions from phone-integrated and body-worn sensors. This opens up possibilities for the development of a variety of novel and exciting applications. With more advanced features, accompanied by technological progress and sensor miniaturisation, eye tracking is increasingly integrated into conventional glasses as well as virtual and augmented reality (VR/AR) head-mounted displays, becoming an integral component of mobile interfaces. This thesis paves the way for the development of socially acceptable, privacy-aware, but highly functional mobile eye tracking devices and novel applications, so that mobile eye tracking can develop its full potential to become an everyday technology for everyone.
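The privacy-aware gaze interface mentioned above relies on differential privacy: calibrated Laplace noise is added to aggregated eye tracking features so that any individual's contribution is masked. A minimal, illustrative sketch of the standard Laplace mechanism — the function names and the per-feature treatment are assumptions for this sketch, not the thesis's actual implementation:

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(features, sensitivity, epsilon, rng=random):
    """Add Laplace noise with scale sensitivity/epsilon to each
    aggregated gaze feature (the classic epsilon-DP Laplace
    mechanism for a single release)."""
    scale = sensitivity / epsilon
    return [x + laplace_noise(scale, rng) for x in features]
```

Smaller epsilon means stronger privacy but noisier features — exactly the privacy-utility trade-off the abstract refers to.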
Export
BibTeX
@phdthesis{Steilphd2019, TITLE = {Mobile Eye Tracking for Everyone}, AUTHOR = {Steil, Julian}, LANGUAGE = {eng}, DOI = {10.22028/D291-30004}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {Eye tracking and gaze-based human-computer interfaces have become a practical modality in desktop settings, since remote eye tracking is efficient and affordable. However, remote eye tracking remains constrained to indoor, laboratory-like conditions, in which lighting and user position need to be controlled. Mobile eye tracking has the potential to overcome these limitations and to allow people to move around freely and to use eye tracking on a daily basis during their everyday routine. However, mobile eye tracking currently faces two fundamental challenges that prevent it from being practically usable and that, consequently, have to be addressed before mobile eye tracking can truly be used by everyone: Mobile eye tracking needs to be advanced and made fully functional in unconstrained environments, and it needs to be made socially acceptable. Numerous sensing and analysis methods were initially developed for remote eye tracking and have been successfully applied for decades. Unfortunately, these methods are limited in terms of functionality and correctness, or even unsuitable for application in mobile eye tracking. Therefore, the majority of fundamental definitions, eye tracking methods, and gaze estimation approaches cannot be borrowed from remote eye tracking without adaptation. For example, the definitions of specific eye movements, like classical fixations, need to be extended to mobile settings where natural user and head motion are omnipresent. Corresponding analytical methods need to be adjusted or completely reimplemented based on novel approaches encoding the human gaze behaviour. 
Apart from these technical challenges, an entirely new, and yet under-explored, topic required for the breakthrough of mobile eye tracking as everyday technology is the overcoming of social obstacles. A first crucial key issue to defuse social objections is the building of acceptance towards mobile eye tracking. Hence, it is essential to replace the bulky appearance of current head-mounted eye trackers with an unobtrusive, appealing, and trendy design. The second high-priority theme of increasing importance for everyone is privacy and its protection, given that research and industry have not focused on or taken care of this problem at all. To establish true confidence, future devices have to find a fine balance between protecting users{\textquoteright} and bystanders{\textquoteright} privacy and attracting and convincing users of their necessity, utility, and potential with useful and beneficial features. The solution of technical challenges and social obstacles is the prerequisite for the development of a variety of novel and exciting applications in order to establish mobile eye tracking as a new paradigm, which ease our everyday life. This thesis addresses core technical challenges of mobile eye tracking that currently prevent it from being widely adopted. Specifically, this thesis proves that 3D data used for the calibration of mobile eye trackers improves gaze estimation and significantly reduces the parallax error. Further, it presents the first effective fixation detection method for head-mounted devices that is robust against the prevalence of user and gaze target motion. In order to achieve social acceptability, this thesis proposes an innovative and unobtrusive design for future mobile eye tracking devices and builds the first prototype with fully frame-embedded eye cameras combined with a calibration-free deep-trained appearance-based gaze estimation approach. 
To protect users{\textquoteright} and bystanders{\textquoteright} privacy in the presence of head-mounted eye trackers, this thesis presents another first-of-its-kind prototype. It is able to identify privacy-sensitive situations to automatically enable and disable the eye tracker{\textquoteright}s first-person camera by means of a mechanical shutter, leveraging the combination of deep scene and eye movement features. Nevertheless, solving technical challenges and social obstacles alone is not sufficient to make mobile eye tracking attractive for the masses. The key to success is the development of convincingly useful, innovative, and essential applications. To extend the protection of users{\textquoteright} privacy on the software side as well, this thesis presents the first privacy-aware VR gaze interface using differential privacy. This method adds noise to recorded eye tracking data so that privacy-sensitive information like a user{\textquoteright}s gender or identity is protected without impeding the utility of the data itself. In addition, the first large-scale online survey is conducted to understand users{\textquoteright} concerns with eye tracking. To develop and evaluate novel applications, this thesis presents the first publicly available long-term eye tracking datasets. They are used to show the unsupervised detection of users{\textquoteright} activities from eye movements alone using novel and efficient video-based encoding approaches as well as to propose the first proof-of-concept method to forecast users{\textquoteright} attentive behaviour during everyday mobile interactions from phone-integrated and body-worn sensors. This opens up possibilities for the development of a variety of novel and exciting applications. 
With more advanced features, accompanied by technological progress and sensor miniaturisation, eye tracking is increasingly integrated into conventional glasses as well as virtual and augmented reality (VR/AR) head-mounted displays, becoming an integral component of mobile interfaces. This thesis paves the way for the development of socially acceptable, privacy-aware, but highly functional mobile eye tracking devices and novel applications, so that mobile eye tracking can develop its full potential to become an everyday technology for everyone.}, }
[20]
M. Voigt, “Decidable fragments of first-order logic and of first-order linear arithmetic with uninterpreted predicates,” Universität des Saarlandes, Saarbrücken, 2019.
Abstract
First-order logic is one of the most prominent formalisms in computer science and mathematics. Since there is no algorithm capable of solving its satisfiability problem, first-order logic is said to be undecidable. The classical decision problem is the quest for a delineation between the decidable and the undecidable parts. The results presented in this thesis shed more light on the boundary and open new perspectives on the landscape of known decidable fragments. In the first part we focus on the new concept of separateness of variables and explore its applicability to the classical decision problem and beyond. Two disjoint sets of first-order variables are separated in a given formula if none of its atoms contains variables from both sets. This notion facilitates the definition of decidable extensions of many well-known decidable first-order fragments. We demonstrate this for several prefix fragments, several guarded fragments, the two-variable fragment, and for the fluted fragment. Although the extensions exhibit the same expressive power as the respective originals, certain logical properties can be expressed much more succinctly. In two cases the succinctness gap cannot be bounded using elementary functions. This fact already hints at computationally hard satisfiability problems. Indeed, we derive non-elementary lower bounds for the separated fragment, an extension of the Bernays-Schönfinkel-Ramsey fragment (∃*∀*-prefix sentences). On the semantic level, separateness of quantified variables may lead to weaker dependences than we encounter in general. We investigate this property in the context of model-checking games. The focus of the second part of the thesis is on linear arithmetic with uninterpreted predicates. Two novel decidable fragments are presented, both based on the Bernays-Schönfinkel-Ramsey fragment. On the negative side, we identify several small fragments of the language for which satisfiability is undecidable.
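The separateness condition at the heart of the first part is easy to state operationally: represent each atom by the set of variables it contains, and require that no atom meets both variable sets. A small illustrative check — encoding formulas as collections of atom-variable sets is an assumption made for this sketch:

```python
def separated(atoms, xs, ys):
    """Return True if the disjoint variable sets xs and ys are
    separated in a formula whose atoms are given as collections
    of the variables they contain: no atom may meet both sets."""
    xs, ys = set(xs), set(ys)
    if xs & ys:
        raise ValueError("the two variable sets must be disjoint")
    return all(not (set(atom) & xs and set(atom) & ys) for atom in atoms)
```

For example, in P(x, z) ∧ Q(y, z) the sets {x} and {y} are separated, while {x, z} and {y} are not, because the atom Q(y, z) contains variables from both.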
Export
BibTeX
@phdthesis{voigtphd2019, TITLE = {Decidable fragments of first-order logic and of first-order linear arithmetic with uninterpreted predicates}, AUTHOR = {Voigt, Marco}, LANGUAGE = {eng}, DOI = {10.22028/D291-28428}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, ABSTRACT = {First-order logic is one of the most prominent formalisms in computer science and mathematics. Since there is no algorithm capable of solving its satisfiability problem, first-order logic is said to be undecidable. The classical decision problem is the quest for a delineation between the decidable and the undecidable parts. The results presented in this thesis shed more light on the boundary and open new perspectives on the landscape of known decidable fragments. In the first part we focus on the new concept of separateness of variables and explore its applicability to the classical decision problem and beyond. Two disjoint sets of first-order variables are separated in a given formula if none of its atoms contains variables from both sets. This notion facilitates the definition of decidable extensions of many well-known decidable first-order fragments. We demonstrate this for several prefix fragments, several guarded fragments, the two-variable fragment, and for the fluted fragment. Although the extensions exhibit the same expressive power as the respective originals, certain logical properties can be expressed much more succinctly. In two cases the succinctness gap cannot be bounded using elementary functions. This fact already hints at computationally hard satisfiability problems. Indeed, we derive non-elementary lower bounds for the separated fragment, an extension of the Bernays-Sch{\"o}nfinkel-Ramsey fragment (E*A*-prefix sentences). On the semantic level, separateness of quantified variables may lead to weaker dependences than we encounter in general. 
We investigate this property in the context of model-checking games. The focus of the second part of the thesis is on linear arithmetic with uninterpreted predicates. Two novel decidable fragments are presented, both based on the Bernays-Sch{\"o}nfinkel-Ramsey fragment. On the negative side, we identify several small fragments of the language for which satisfiability is undecidable.}, }
2018
[21]
T. Bastys, “Analysis of the protein-Ligand and protein-peptide interactions using a combined sequence- and structure-based approach,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
Proteins participate in most of the important processes in cells, and their ability to perform their function ultimately depends on their three-dimensional structure. They usually act in these processes through interactions with other molecules. Because of the importance of their role, proteins are also the common target for small molecule drugs that inhibit their activity, which may include targeting protein interactions. Understanding protein interactions and how they are affected by mutations is thus crucial for combating drug resistance and aiding drug design. This dissertation combines bioinformatics studies of protein interactions at both primary sequence and structural level. We analyse protein-protein interactions through linear motifs, as well as protein-small molecule interactions, and study how mutations affect them. This is done in the context of two systems. In the first study of drug resistance mutations in the protease of the human immunodeficiency virus type 1, we successfully apply molecular dynamics simulations to estimate the effects of known resistance-associated mutations on the free binding energy, also revealing molecular mechanisms of resistance. In the second study, we analyse consensus profiles of linear motifs that mediate the recognition by the mitogen-activated protein kinases of their target proteins. We thus gain insights into the cellular processes these proteins are involved in.
Export
BibTeX
@phdthesis{Bastysphd2013, TITLE = {Analysis of the protein-Ligand and protein-peptide interactions using a combined sequence- and structure-based approach}, AUTHOR = {Bastys, Tomas}, LANGUAGE = {eng}, DOI = {10.22028/D291-27920}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, ABSTRACT = {Proteins participate in most of the important processes in cells, and their ability to perform their function ultimately depends on their three-dimensional structure. They usually act in these processes through interactions with other molecules. Because of the importance of their role, proteins are also the common target for small molecule drugs that inhibit their activity, which may include targeting protein interactions. Understanding protein interactions and how they are affected by mutations is thus crucial for combating drug resistance and aiding drug design. This dissertation combines bioinformatics studies of protein interactions at both primary sequence and structural level. We analyse protein-protein interactions through linear motifs, as well as protein-small molecule interactions, and study how mutations affect them. This is done in the context of two systems. In the first study of drug resistance mutations in the protease of the human immunodeficiency virus type 1, we successfully apply molecular dynamics simulations to estimate the effects of known resistance-associated mutations on the free binding energy, also revealing molecular mechanisms of resistance. In the second study, we analyse consensus profiles of linear motifs that mediate the recognition by the mitogen-activated protein kinases of their target proteins. We thus gain insights into the cellular processes these proteins are involved in.}, }
[22]
A. Bichhawat, “Practical Dynamic Information Flow Control,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
Over the years, computer systems and applications have grown significantly complex while handling a plethora of private and sensitive user information. The complexity of these applications is often assisted by a set of (un)intentional bugs with both malicious and non-malicious intent leading to information leaks. Information flow control has been studied extensively as an approach to mitigate such information leaks. The technique works by enforcing the security property of non-interference using a specified set of security policies. A vast majority of existing work in this area is based on static analyses. However, some of the applications, especially on the Web, are developed using dynamic languages like JavaScript that make the static analysis techniques stale and ineffective. As a result, there has been a growing interest in recent years to develop dynamic information flow analysis techniques. In spite of the advances in the field, dynamic information flow analysis has not been at the helm of information flow security in dynamic settings like the Web; the prime reason being that the analysis techniques and the security property related to them (non-interference) either over-approximate or are too restrictive in most cases. Concretely, the analysis techniques generate a lot of false positives, do not allow legitimate release of sensitive information, support only static and rigid security policies, or are not general enough to be applied to real-world applications. This thesis focuses on improving the usability of dynamic information flow techniques by presenting mechanisms that can enhance the precision and permissiveness of the analyses. It begins by presenting a sound improvement and enhancement of the permissive-upgrade strategy, a strategy widely used to enforce dynamic information flow control, which improves the strategy’s permissiveness and makes it generic in applicability. 
The thesis then presents a sound and precise control scope analysis for handling complex features like unstructured control flow and exceptions in higher-order languages. Although non-interference is a desired property for enforcing information flow control, there are program instances that require legitimate release of some parts of the secret data to provide the required functionality. Towards this end, this thesis develops a sound approach to bound information leaks dynamically while allowing information release in accordance with a pre-specified budget. The thesis concludes by applying these techniques to an information flow control-enabled Web browser and explores a policy specification mechanism that allows flexible and useful information flow policies to be specified for Web applications.
Export
BibTeX
@phdthesis{bichhawatphd2017, TITLE = {Practical Dynamic Information Flow Control}, AUTHOR = {Bichhawat, Abhishek}, LANGUAGE = {eng}, DOI = {10.22028/D291-27244}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, ABSTRACT = {Over the years, computer systems and applications have grown significantly complex while handling a plethora of private and sensitive user information. The complexity of these applications is often assisted by a set of (un)intentional bugs with both malicious and non-malicious intent leading to information leaks. Information flow control has been studied extensively as an approach to mitigate such information leaks. The technique works by enforcing the security property of non-interference using a specified set of security policies. A vast majority of existing work in this area is based on static analyses. However, some of the applications, especially on the Web, are developed using dynamic languages like JavaScript that make the static analysis techniques stale and ineffective. As a result, there has been a growing interest in recent years to develop dynamic information flow analysis techniques. In spite of the advances in the field, dynamic information flow analysis has not been at the helm of information flow security in dynamic settings like the Web; the prime reason being that the analysis techniques and the security property related to them (non-interference) either over-approximate or are too restrictive in most cases. Concretely, the analysis techniques generate a lot of false positives, do not allow legitimate release of sensitive information, support only static and rigid security policies, or are not general enough to be applied to real-world applications. This thesis focuses on improving the usability of dynamic information flow techniques by presenting mechanisms that can enhance the precision and permissiveness of the analyses. 
It begins by presenting a sound improvement and enhancement of the permissive-upgrade strategy, a strategy widely used to enforce dynamic information flow control, which improves the strategy{\textquoteright}s permissiveness and makes it generic in applicability. The thesis then presents a sound and precise control scope analysis for handling complex features like unstructured control flow and exceptions in higher-order languages. Although non-interference is a desired property for enforcing information flow control, there are program instances that require legitimate release of some parts of the secret data to provide the required functionality. Towards this end, this thesis develops a sound approach to bound information leaks dynamically while allowing information release in accordance with a pre-specified budget. The thesis concludes by applying these techniques to an information flow control-enabled Web browser and explores a policy specification mechanism that allows flexible and useful information flow policies to be specified for Web applications.}, }
[23]
J. Doerfert, “Applicable and sound polyhedral optimization of low-level programs,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
Computers become increasingly complex. Current and future systems feature configurable hardware, multiple cores with different capabilities, as well as accelerators. In addition, the memory subsystem becomes diversified too. The cache hierarchy grows deeper, is augmented with scratchpads, low-latency memory, and high-bandwidth memory. The programmer alone cannot utilize this enormous potential. Compilers have to provide insight into the program behavior, or even arrange computations and data themselves. Either way, they need a more holistic view of the program. Local transformations, which treat the iteration order, computation unit, and data layout as fixed, will not be able to fully utilize a diverse system. The polyhedral model, a high-level program representation and transformation framework, has shown great success tackling various problems in the context of diverse systems. While it is widely acknowledged for its analytical powers and transformation capabilities, it is also widely assumed to be too restrictive and fragile for real-world programs. In this thesis we improve the applicability and profitability of polyhedral-model-based techniques. Our efforts guarantee a sound polyhedral representation and extend the applicability to a wider range of programs. In addition, we introduce new applications to utilize the information available in the polyhedral program representation, including standalone optimizations and techniques to derive high-level properties.
Export
BibTeX
@phdthesis{doerfertphd2019, TITLE = {Applicable and sound polyhedral optimization of low-level programs}, AUTHOR = {Doerfert, Johannes}, LANGUAGE = {eng}, DOI = {10.22028/D291-29814}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, ABSTRACT = {Computers become increasingly complex. Current and future systems feature configurable hardware, multiple cores with different capabilities, as well as accelerators. In addition, the memory subsystem becomes diversified too. The cache hierarchy grows deeper, is augmented with scratchpads, low-latency memory, and high-bandwidth memory. The programmer alone cannot utilize this enormous potential. Compilers have to provide insight into the program behavior, or even arrange computations and data themselves. Either way, they need a more holistic view of the program. Local transformations, which treat the iteration order, computation unit, and data layout as fixed, will not be able to fully utilize a diverse system. The polyhedral model, a high-level program representation and transformation framework, has shown great success tackling various problems in the context of diverse systems. While it is widely acknowledged for its analytical powers and transformation capabilities, it is also widely assumed to be too restrictive and fragile for real-world programs. In this thesis we improve the applicability and profitability of polyhedral-model-based techniques. Our efforts guarantee a sound polyhedral representation and extend the applicability to a wider range of programs. In addition, we introduce new applications to utilize the information available in the polyhedral program representation, including standalone optimizations and techniques to derive high-level properties.}, }
[24]
P. Ernst, “Biomedical Knowledge Base Construction from Text and its Applications in Knowledge-based Systems,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
While general-purpose Knowledge Bases (KBs) have gone a long way in compiling comprehensive knowledge about people, events, places, etc., domain-specific KBs, such as on health, are equally important, but are less explored. Consequently, a comprehensive and expressive health KB that spans all aspects of biomedical knowledge is still missing. The main goal of this thesis is to develop principled methods for building such a KB and enabling knowledge-centric applications. We address several challenges and make the following contributions: - To construct a health KB, we devise a largely automated and scalable pattern-based knowledge extraction method covering a spectrum of different text genres and distilling a wide variety of facts from different biomedical areas. - To consider higher-arity relations, crucial for proper knowledge representation in advanced domains such as health, we generalize the fact-pattern duality paradigm of previous methods. A key novelty is the integration of facts with missing arguments by extending our framework to partial patterns and facts and by reasoning over the composability of partial facts. - To demonstrate the benefits of a health KB, we devise systems for entity-aware search and analytics and for entity-relationship-oriented exploration. Extensive experiments and use-case studies demonstrate the viability of the proposed approaches.
Export
BibTeX
@phdthesis{Ernstphd2017, TITLE = {Biomedical Knowledge Base Construction from Text and its Applications in Knowledge-based Systems}, AUTHOR = {Ernst, Patrick}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-271051}, DOI = {10.22028/D291-27105}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {While general-purpose Knowledge Bases (KBs) have gone a long way in compiling comprehensive knowledge about people, events, places, etc., domain-specific KBs, such as on health, are equally important, but are less explored. Consequently, a comprehensive and expressive health KB that spans all aspects of biomedical knowledge is still missing. The main goal of this thesis is to develop principled methods for building such a KB and enabling knowledge-centric applications. We address several challenges and make the following contributions: -- To construct a health KB, we devise a largely automated and scalable pattern-based knowledge extraction method covering a spectrum of different text genres and distilling a wide variety of facts from different biomedical areas. -- To consider higher-arity relations, crucial for proper knowledge representation in advanced domains such as health, we generalize the fact-pattern duality paradigm of previous methods. A key novelty is the integration of facts with missing arguments by extending our framework to partial patterns and facts and by reasoning over the composability of partial facts. -- To demonstrate the benefits of a health KB, we devise systems for entity-aware search and analytics and for entity-relationship-oriented exploration. Extensive experiments and use-case studies demonstrate the viability of the proposed approaches.}, }
[25]
S. Garg, “Computational Haplotyping: Theory and Practice,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
Genomics has paved a new way to comprehend life and its evolution, and also to investigate causes of diseases and their treatment. One of the important problems in genomic analyses is haplotype assembly. Constructing complete and accurate haplotypes plays an essential role in understanding population genetics and how species evolve. In this thesis, we focus on computational approaches to haplotype assembly from third generation sequencing technologies. This involves huge amounts of sequencing data, and such data contain errors due to the single molecule sequencing protocols employed. Taking advantage of combinatorial formulations helps to correct for these errors to solve the haplotyping problem. Various computational techniques such as dynamic programming, parameterized algorithms, and graph algorithms are used to solve this problem. This thesis presents several contributions concerning the area of haplotyping. First, a novel algorithm based on dynamic programming is proposed to provide approximation guarantees for phasing a single individual. Second, an integrative approach is introduced to combine multiple sequencing datasets to generate complete and accurate haplotypes. The effectiveness of this integrative approach is demonstrated on a real human genome. Third, we provide a novel efficient approach to phasing pedigrees and demonstrate its advantages in comparison to phasing a single individual. Fourth, we present a generalized graph-based framework for performing haplotype-aware de novo assembly. Specifically, this generalized framework consists of a hybrid pipeline for generating accurate and complete haplotypes from data stemming from multiple sequencing technologies, one that provides accurate reads and another that provides long reads.
Export
BibTeX
@phdthesis{gargphd2017, TITLE = {Computational Haplotyping: Theory and Practice}, AUTHOR = {Garg, Shilpa}, LANGUAGE = {eng}, DOI = {10.22028/D291-27252}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, ABSTRACT = {Genomics has paved a new way to comprehend life and its evolution, and also to investigate causes of diseases and their treatment. One of the important problems in genomic analyses is haplotype assembly. Constructing complete and accurate haplotypes plays an essential role in understanding population genetics and how species evolve. In this thesis, we focus on computational approaches to haplotype assembly from third generation sequencing technologies. This involves huge amounts of sequencing data, and such data contain errors due to the single molecule sequencing protocols employed. Taking advantage of combinatorial formulations helps to correct for these errors to solve the haplotyping problem. Various computational techniques such as dynamic programming, parameterized algorithms, and graph algorithms are used to solve this problem. This thesis presents several contributions concerning the area of haplotyping. First, a novel algorithm based on dynamic programming is proposed to provide approximation guarantees for phasing a single individual. Second, an integrative approach is introduced to combine multiple sequencing datasets to generate complete and accurate haplotypes. The effectiveness of this integrative approach is demonstrated on a real human genome. Third, we provide a novel efficient approach to phasing pedigrees and demonstrate its advantages in comparison to phasing a single individual. Fourth, we present a generalized graph-based framework for performing haplotype-aware de novo assembly. 
Specifically, this generalized framework consists of a hybrid pipeline for generating accurate and complete haplotypes from data stemming from multiple sequencing technologies, one that provides accurate reads and another that provides long reads.}, }
Endnote
%0 Thesis %A Garg, Shilpa %Y Marschall, Tobias %A referee: Helms, Volkhard %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Computational Haplotyping: Theory and Practice : %G eng %U http://hdl.handle.net/21.11116/0000-0001-9D80-D %R 10.22028/D291-27252 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2018 %P 119 p. %V phd %9 phd %X Genomics has paved a new way to comprehend life and its evolution, and also to investigate causes of diseases and their treatment. One of the important problems in genomic analyses is haplotype assembly. Constructing complete and accurate haplotypes plays an essential role in understanding population genetics and how species evolve. In this thesis, we focus on computational approaches to haplotype assembly from third generation sequencing technologies. This involves huge amounts of sequencing data, and such data contain errors due to the single molecule sequencing protocols employed. Taking advantage of combinatorial formulations helps to correct for these errors to solve the haplotyping problem. Various computational techniques such as dynamic programming, parameterized algorithms, and graph algorithms are used to solve this problem. This thesis presents several contributions concerning the area of haplotyping. First, a novel algorithm based on dynamic programming is proposed to provide approximation guarantees for phasing a single individual. Second, an integrative approach is introduced to combining multiple sequencing datasets to generating complete and accurate haplotypes. The effectiveness of this integrative approach is demonstrated on a real human genome. 
Third, we provide a novel efficient approach to phasing pedigrees and demonstrate its advantages in comparison to phasing a single individual. Fourth, we present a generalized graph-based framework for performing haplotype-aware de novo assembly. Specifically, this generalized framework consists of a hybrid pipeline for generating accurate and complete haplotypes from data stemming from multiple sequencing technologies, one that provides accurate reads and other that provides long reads. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27102
[26]
S. Heydrich, “A Tale of Two Packing Problems: Improved Algorithms and Tighter Bounds for Online Bin Packing and the Geometric Knapsack Problem,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
In this thesis, we deal with two packing problems: the online bin packing and the geometric knapsack problem. In online bin packing, the aim is to pack a given number of items of different size into a minimal number of containers. The items need to be packed one by one without knowing future items. For online bin packing in one dimension, we present a new family of algorithms that constitutes the first improvement over the previously best algorithm in almost 15 years. While the algorithmic ideas are intuitive, an elaborate analysis is required to prove its competitive ratio. We also give a lower bound for the competitive ratio of this family of algorithms. For online bin packing in higher dimensions, we discuss lower bounds for the competitive ratio and show that the ideas from the one-dimensional case cannot be easily transferred to obtain better two-dimensional algorithms. In the geometric knapsack problem, one aims to pack a maximum-weight subset of given rectangles into one square container. For this problem, we consider offline approximation algorithms. For geometric knapsack with square items, we improve the running time of the best known PTAS and obtain an EPTAS. This shows that large running times caused by some standard techniques for geometric packing problems are not always necessary and can be improved. Finally, we show how to use resource augmentation to compute optimal solutions in EPTAS-time, thereby improving upon the known PTAS for this case.
Export
BibTeX
@phdthesis{Heydrphd18, TITLE = {A Tale of Two Packing Problems: Improved Algorithms and Tighter Bounds for Online Bin Packing and the Geometric Knapsack Problem}, AUTHOR = {Heydrich, Sandy}, LANGUAGE = {eng}, DOI = {10.22028/D291-27240}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, ABSTRACT = {In this thesis, we deal with two packing problems: the online bin packing and the geometric knapsack problem. In online bin packing, the aim is to pack a given number of items of different size into a minimal number of containers. The items need to be packed one by one without knowing future items. For online bin packing in one dimension, we present a new family of algorithms that constitutes the first improvement over the previously best algorithm in almost 15 years. While the algorithmic ideas are intuitive, an elaborate analysis is required to prove its competitive ratio. We also give a lower bound for the competitive ratio of this family of algorithms. For online bin packing in higher dimensions, we discuss lower bounds for the competitive ratio and show that the ideas from the one-dimensional case cannot be easily transferred to obtain better two-dimensional algorithms. In the geometric knapsack problem, one aims to pack a maximum-weight subset of given rectangles into one square container. For this problem, we consider offline approximation algorithms. For geometric knapsack with square items, we improve the running time of the best known PTAS and obtain an EPTAS. This shows that large running times caused by some standard techniques for geometric packing problems are not always necessary and can be improved. Finally, we show how to use resource augmentation to compute optimal solutions in EPTAS-time, thereby improving upon the known PTAS for this case.}, }
Endnote
%0 Thesis %A Heydrich, Sandy %Y van Stee, Rob %A referee: Mehlhorn, Kurt %A referee: Grandoni, Fabrizio %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Discrete Optimization, MPI for Informatics, Max Planck Society %T A Tale of Two Packing Problems: Improved Algorithms and Tighter Bounds for Online Bin Packing and the Geometric Knapsack Problem : %G eng %U http://hdl.handle.net/21.11116/0000-0001-E3DC-7 %R 10.22028/D291-27240 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2018 %P viii, 161 p. %V phd %9 phd %X In this thesis, we deal with two packing problems: the online bin packing and the geometric knapsack problem. In online bin packing, the aim is to pack a given number of items of different size into a minimal number of containers. The items need to be packed one by one without knowing future items. For online bin packing in one dimension, we present a new family of algorithms that constitutes the first improvement over the previously best algorithm in almost 15 years. While the algorithmic ideas are intuitive, an elaborate analysis is required to prove its competitive ratio. We also give a lower bound for the competitive ratio of this family of algorithms. For online bin packing in higher dimensions, we discuss lower bounds for the competitive ratio and show that the ideas from the one-dimensional case cannot be easily transferred to obtain better two-dimensional algorithms. In the geometric knapsack problem, one aims to pack a maximum-weight subset of given rectangles into one square container. For this problem, we consider offline approximation algorithms. For geometric knapsack with square items, we improve the running time of the best known PTAS and obtain an EPTAS. 
This shows that large running times caused by some standard techniques for geometric packing problems are not always necessary and can be improved. Finally, we show how to use resource augmentation to compute optimal solutions in EPTAS-time, thereby improving upon the known PTAS for this case. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27141
[27]
P. Kolev, “Algorithmic Results for Clustering and Refined Physarum Analysis,” Universität des Saarlandes, Saarbrücken, 2018.
Export
BibTeX
@phdthesis{Kolev_PhD2018, TITLE = {Algorithmic Results for Clustering and Refined Physarum Analysis}, AUTHOR = {Kolev, Pavel}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-275519}, DOI = {10.22028/D291-27551}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, }
Endnote
%0 Thesis %A Kolev, Pavel %Y Mehlhorn, Kurt %A referee: Bringmann, Karl %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Algorithmic Results for Clustering and Refined Physarum Analysis : %G eng %U http://hdl.handle.net/21.11116/0000-0003-3937-0 %R 10.22028/D291-27551 %U urn:nbn:de:bsz:291-scidok-ds-275519 %F OTHER: hdl:20.500.11880/27234 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2018 %P XIV, 123 p. %V phd %9 phd %U http://dx.doi.org/10.22028/D291-27551
[28]
W. Li, “From Perception over Anticipation to Manipulation,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
From autonomous driving cars to surgical robots, robotic systems have enjoyed significant growth over the past decade. With the rapid development in robotics alongside the evolution of related fields such as computer vision and machine learning, integrating perception, anticipation, and manipulation is key to the success of future robotic systems. In this thesis, we explore different ways of such integration to extend the capabilities of a robotic system to take on more challenging real-world tasks. On anticipation and perception, we address the recognition of ongoing activity from videos. In particular, we focus on long-duration and complex activities and hence propose a new challenging dataset to facilitate the work. We introduce hierarchical labels over the activity classes and investigate the temporal accuracy-specificity trade-offs. We propose a new method based on recurrent neural networks that learns to predict over this hierarchy and realizes accuracy-specificity trade-offs. Our method outperforms several baselines on this new challenge. On manipulation with perception, we propose an efficient framework for programming a robot to use human tools. We first present a novel and compact model for using tools described by a tip model. Then we explore a strategy of utilizing a dual-gripper approach for manipulating tools, motivated by the absence of dexterous hands on widely available general-purpose robots. Afterwards, we embed the tool-use learning into a hierarchical architecture and evaluate it on a Baxter research robot. Finally, combining perception, anticipation, and manipulation, we focus on a block stacking task. First, we explore how to guide a robot to place a single block into the scene without collapsing the existing structure. We introduce a mechanism to predict physical stability directly from visual input and evaluate it first on synthetic data and then on real-world block stacking.
Further, we introduce the target stacking task, where the agent stacks blocks to reproduce a tower shown in an image. To do so, we create a synthetic block stacking environment with physics simulation in which the agent can learn block stacking end-to-end through trial and error, without having to explicitly model the corresponding physics knowledge. We propose a goal-parametrized GDQN model to plan with respect to the specific goal. We validate the model on both a navigation task in a classic gridworld environment and the block stacking task.
Export
BibTeX
@phdthesis{Wenbinphd2018, TITLE = {From Perception over Anticipation to Manipulation}, AUTHOR = {Li, Wenbin}, LANGUAGE = {eng}, DOI = {10.22028/D291-27156}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, ABSTRACT = {From autonomous driving cars to surgical robots, robotic system has enjoyed significant growth over the past decade. With the rapid development in robotics alongside the evolution in the related fields, such as computer vision and machine learning, integrating perception, anticipation and manipulation is key to the success of future robotic system. In this thesis, we explore different ways of such integration to extend the capabilities of a robotic system to take on more challenging real world tasks. On anticipation and perception, we address the recognition of ongoing activity from videos. In particular we focus on long-duration and complex activities and hence propose a new challenging dataset to facilitate the work. We introduce hierarchical labels over the activity classes and investigate the temporal accuracy-specificity trade-offs. We propose a new method based on recurrent neural networks that learns to predict over this hierarchy and realize accuracy specificity trade-offs. Our method outperforms several baselines on this new challenge. On manipulation with perception, we propose an efficient framework for programming a robot to use human tools. We first present a novel and compact model for using tools described by a tip model. Then we explore a strategy of utilizing a dual-gripper approach for manipulating tools -- motivated by the absence of dexterous hands on widely available general purpose robots. Afterwards, we embed the tool use learning into a hierarchical architecture and evaluate it on a Baxter research robot. Finally, combining perception, anticipation and manipulation, we focus on a block stacking task. 
First we explore how to guide robot to place a single block into the scene without collapsing the existing structure. We introduce a mechanism to predict physical stability directly from visual input and evaluate it first on a synthetic data and then on real-world block stacking. Further, we introduce the target stacking task where the agent stacks blocks to reproduce a tower shown in an image. To do so, we create a synthetic block stacking environment with physics simulation in which the agent can learn block stacking end-to-end through trial and error, bypassing to explicitly model the corresponding physics knowledge. We propose a goal-parametrized GDQN model to plan with respect to the specific goal. We validate the model on both a navigation task in a classic gridworld environment and the block stacking task.}, }
Endnote
%0 Thesis %A Li, Wenbin %Y Fritz, Mario %A referee: Leonardis, Ale&#353; %A referee: Slussalek, Philip %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T From Perception over Anticipation to Manipulation : %G eng %U http://hdl.handle.net/21.11116/0000-0001-4193-F %R 10.22028/D291-27156 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2018 %P 165 p. %V phd %9 phd %X From autonomous driving cars to surgical robots, robotic system has enjoyed significant growth over the past decade. With the rapid development in robotics alongside the evolution in the related fields, such as computer vision and machine learning, integrating perception, anticipation and manipulation is key to the success of future robotic system. In this thesis, we explore different ways of such integration to extend the capabilities of a robotic system to take on more challenging real world tasks. On anticipation and perception, we address the recognition of ongoing activity from videos. In particular we focus on long-duration and complex activities and hence propose a new challenging dataset to facilitate the work. We introduce hierarchical labels over the activity classes and investigate the temporal accuracy-specificity trade-offs. We propose a new method based on recurrent neural networks that learns to predict over this hierarchy and realize accuracy specificity trade-offs. Our method outperforms several baselines on this new challenge. On manipulation with perception, we propose an efficient framework for programming a robot to use human tools. We first present a novel and compact model for using tools described by a tip model. 
Then we explore a strategy of utilizing a dual-gripper approach for manipulating tools &#8211; motivated by the absence of dexterous hands on widely available general purpose robots. Afterwards, we embed the tool use learning into a hierarchical architecture and evaluate it on a Baxter research robot. Finally, combining perception, anticipation and manipulation, we focus on a block stacking task. First we explore how to guide robot to place a single block into the scene without collapsing the existing structure. We introduce a mechanism to predict physical stability directly from visual input and evaluate it first on a synthetic data and then on real-world block stacking. Further, we introduce the target stacking task where the agent stacks blocks to reproduce a tower shown in an image. To do so, we create a synthetic block stacking environment with physics simulation in which the agent can learn block stacking end-to-end through trial and error, bypassing to explicitly model the corresponding physics knowledge. We propose a goal-parametrized GDQN model to plan with respect to the specific goal. We validate the model on both a navigation task in a classic gridworld environment and the block stacking task. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27026
[29]
A. Mishra, “Leveraging Semantic Annotations for Event-focused Search & Summarization,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
Today, in this Big Data era, overwhelming amounts of textual information across different sources, with a high degree of redundancy, have made it hard for a consumer to retrospect on past events. A plausible solution is to link semantically similar information contained across the different sources to enforce a structure, thereby providing multiple access paths to relevant information. Keeping this larger goal in view, this work uses Wikipedia and online news articles as two prominent yet disparate information sources to address the following three problems:
• We address a linking problem to connect Wikipedia excerpts to news articles by casting it into an IR task. Our novel approach integrates time, geolocations, and entities with text to identify relevant documents that can be linked to a given excerpt.
• We address an unsupervised extractive multi-document summarization task to generate a fixed-length event digest that facilitates efficient consumption of the information contained within a large set of documents. Our novel approach proposes an ILP for global inference across text, time, geolocations, and entities associated with the event.
• To estimate the temporal focus of short event descriptions, we present a semi-supervised approach that leverages redundancy within a longitudinal news collection to estimate accurate probabilistic time models.
Extensive experimental evaluations demonstrate the effectiveness and viability of our proposed approaches towards achieving the larger goal.
Export
BibTeX
@phdthesis{Mishraphd2018, TITLE = {Leveraging Semantic Annotations for Event-focused Search \& Summarization}, AUTHOR = {Mishra, Arunav}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-271081}, DOI = {10.22028/D291-27108}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Today in this Big Data era, overwhelming amounts of textual information across different sources with a high degree of redundancy has made it hard for a consumer to retrospect on past events. A plausible solution is to link semantically similar information contained across the different sources to enforce a structure thereby providing multiple access paths to relevant information. Keeping this larger goal in view, this work uses Wikipedia and online news articles as two prominent yet disparate information sources to address the following three problems: \mbox{$\bullet$} We address a linking problem to connect Wikipedia excerpts to news articles by casting it into an IR task. Our novel approach integrates time, geolocations, and entities with text to identify relevant documents that can be linked to a given excerpt. \mbox{$\bullet$} We address an unsupervised extractive multi-document summarization task to generate a fixed-length event digest that facilitates efficient consumption of information contained within a large set of documents. Our novel approach proposes an ILP for global inference across text, time, geolocations, and entities associated with the event. \mbox{$\bullet$} To estimate temporal focus of short event descriptions, we present a semi-supervised approach that leverages redundancy within a longitudinal news collection to estimate accurate probabilistic time models. Extensive experimental evaluations demonstrate the effectiveness and viability of our proposed approaches towards achieving the larger goal.}, }
Endnote
%0 Thesis %A Mishra, Arunav %Y Berberich, Klaus %A referee: Weikum, Gerhard %A referee: Hauff, Claudia %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Leveraging Semantic Annotations for Event-focused Search & Summarization : %G eng %U http://hdl.handle.net/21.11116/0000-0001-1844-8 %U urn:nbn:de:bsz:291-scidok-ds-271081 %R 10.22028/D291-27108 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2018 %8 08.02.2018 %P 252 p. %V phd %9 phd %X Today in this Big Data era, overwhelming amounts of textual information across different sources with a high degree of redundancy has made it hard for a consumer to retrospect on past events. A plausible solution is to link semantically similar information contained across the different sources to enforce a structure thereby providing multiple access paths to relevant information. Keeping this larger goal in view, this work uses Wikipedia and online news articles as two prominent yet disparate information sources to address the following three problems: &#8226; We address a linking problem to connect Wikipedia excerpts to news articles by casting it into an IR task. Our novel approach integrates time, geolocations, and entities with text to identify relevant documents that can be linked to a given excerpt. &#8226; We address an unsupervised extractive multi-document summarization task to generate a fixed-length event digest that facilitates efficient consumption of information contained within a large set of documents. Our novel approach proposes an ILP for global inference across text, time, geolocations, and entities associated with the event. 
&#8226; To estimate temporal focus of short event descriptions, we present a semi-supervised approach that leverages redundancy within a longitudinal news collection to estimate accurate probabilistic time models. Extensive experimental evaluations demonstrate the effectiveness and viability of our proposed approaches towards achieving the larger goal. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/26995
[30]
S. J. Oh, “Image Manipulation against Learned Models: Privacy and Security Implications,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
Machine learning is transforming the world. Its application areas span privacy-sensitive and security-critical tasks such as human identification and self-driving cars. These applications raise privacy- and security-related questions that are not yet fully understood or answered: Can automatic person recognisers identify people in photos even when their faces are blurred? How easy is it to find an adversarial input for a self-driving car that makes it drive off the road? This thesis contributes one of the first steps towards a better understanding of such concerns. We observe that many privacy- and security-critical scenarios for learned models involve input data manipulation: users obfuscate their identity by blurring their faces, and adversaries inject imperceptible perturbations into the input signal. We introduce a data manipulator framework as a tool for collectively describing and analysing privacy- and security-relevant scenarios involving learned models. A data manipulator introduces a shift in the data distribution to achieve privacy- or security-related goals, and feeds the transformed input to the target model. This framework provides a common perspective on the studies presented in the thesis. We begin the studies from the user’s privacy point of view. We analyse the efficacy of common obfuscation methods like face blurring, and show that they are surprisingly ineffective against state-of-the-art person recognition systems. We then propose alternatives based on head inpainting and adversarial examples. In studying user privacy, we also study the dual problem: model security. From the model security perspective, a model ought to be robust and reliable against small amounts of data manipulation. In both cases, data are manipulated with the goal of changing the target model’s prediction. User privacy and model security problems can thus be described with the same objective. We then study the knowledge aspect of the data manipulation problem.
The more one knows about the target model, the more effective manipulations one can craft. We propose a game-theoretic manipulation framework to systematically represent the level of knowledge about the target model and to derive privacy and security guarantees. We then discuss ways to increase knowledge about a black-box model by merely querying it, deriving implications that are relevant to both the privacy and the security perspectives.
Export
BibTeX
@phdthesis{Ohphd18, TITLE = {Image Manipulation against Learned Models Privacy and Security Implications}, AUTHOR = {Oh, Seong Joon}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-273042}, DOI = {10.22028/D291-27304}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, ABSTRACT = {Machine learning is transforming the world. Its application areas span privacy sensitive and security critical tasks such as human identification and self-driving cars. These applications raise privacy and security related questions that are not fully understood or answered yet: Can automatic person recognisers identify people in photos even when their faces are blurred? How easy is it to find an adversarial input for a self-driving car that makes it drive off the road? This thesis contributes one of the first steps towards a better understanding of such concerns. We observe that many privacy and security critical scenarios for learned models involve input data manipulation: users obfuscate their identity by blurring their faces and adversaries inject imperceptible perturbations to the input signal. We introduce a data manipulator framework as a tool for collectively describing and analysing privacy and security relevant scenarios involving learned models. A data manipulator introduces a shift in data distribution for achieving privacy or security related goals, and feeds the transformed input to the target model. This framework provides a common perspective on the studies presented in the thesis. We begin the studies from the user{\textquoteright}s privacy point of view. We analyse the efficacy of common obfuscation methods like face blurring, and show that they are surprisingly ineffective against state of the art person recognition systems. We then propose alternatives based on head inpainting and adversarial examples. By studying the user privacy, we also study the dual problem: model security. 
In model security perspective, a model ought to be robust and reliable against small amounts of data manipulation. In both cases, data are manipulated with the goal of changing the target model prediction. User privacy and model security problems can be described with the same objective. We then study the knowledge aspect of the data manipulation problem. The more one knows about the target model, the more effective manipulations one can craft. We propose a game theoretic manipulation framework to systematically represent the knowledge level on the target model and derive privacy and security guarantees. We then discuss ways to increase knowledge about a black-box model by only querying it, deriving implications that are relevant to both privacy and security perspectives.}, }
Endnote
%0 Thesis %A Oh, Seong Joon %Y Schiele, Bernt %A referee: Fritz, Mario %A referee: Shmatikov, Vitaly %A referee: Belongie, Serge %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Image Manipulation against Learned Models Privacy and Security Implications : %G eng %U http://hdl.handle.net/21.11116/0000-0001-E481-B %R 10.22028/D291-27304 %U urn:nbn:de:bsz:291-scidok-ds-273042 %F OTHER: hdl:20.500.11880/27146 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2018 %P 218 p. %V phd %9 phd %X Machine learning is transforming the world. Its application areas span privacy sensitive and security critical tasks such as human identification and self-driving cars. These applications raise privacy and security related questions that are not fully understood or answered yet: Can automatic person recognisers identify people in photos even when their faces are blurred? How easy is it to find an adversarial input for a self-driving car that makes it drive off the road? This thesis contributes one of the first steps towards a better understanding of such concerns. We observe that many privacy and security critical scenarios for learned models involve input data manipulation: users obfuscate their identity by blurring their faces and adversaries inject imperceptible perturbations to the input signal. We introduce a data manipulator framework as a tool for collectively describing and analysing privacy and security relevant scenarios involving learned models. A data manipulator introduces a shift in data distribution for achieving privacy or security related goals, and feeds the transformed input to the target model. 
This framework provides a common perspective on the studies presented in the thesis. We begin the studies from the user&#8217;s privacy point of view. We analyse the efficacy of common obfuscation methods like face blurring, and show that they are surprisingly ineffective against state of the art person recognition systems. We then propose alternatives based on head inpainting and adversarial examples. By studying the user privacy, we also study the dual problem: model security. In model security perspective, a model ought to be robust and reliable against small amounts of data manipulation. In both cases, data are manipulated with the goal of changing the target model prediction. User privacy and model security problems can be described with the same objective. We then study the knowledge aspect of the data manipulation problem. The more one knows about the target model, the more effective manipulations one can craft. We propose a game theoretic manipulation framework to systematically represent the knowledge level on the target model and derive privacy and security guarantees. We then discuss ways to increase knowledge about a black-box model by only querying it, deriving implications that are relevant to both privacy and security perspectives. %U http://dx.doi.org/10.22028/D291-27304
[31]
A. Teucke, “An Approximation and Refinement Approach to First-Order Automated Reasoning,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
With the goal of lifting model-based guidance from the propositional setting to first-order logic, I have developed an approximation theorem proving approach based on counterexample-guided abstraction refinement. A given clause set is transformed into a simplified form where satisfiability is decidable. This approximation extends the signature and preserves unsatisfiability: if the simplified clause set is satisfiable, so is the original clause set. A resolution refutation generated by a decision procedure on the simplified clause set can then either be lifted to a refutation in the original clause set, or it guides a refinement that excludes the previously found unliftable refutation. This way the approach is refutationally complete. The monadic shallow linear Horn fragment, which is the initial target of the approximation, is well known to be decidable. It was a long-standing open problem how to extend the fragment to the non-Horn case while preserving decidability, which would, e.g., make it possible to express non-determinism in protocols. I have now proven decidability of the non-Horn monadic shallow linear fragment via ordered resolution. I further extend the clause language with a new type of constraints, called straight dismatching constraints. The extended clause language is motivated by an improved refinement step of the approximation-refinement framework. All needed operations on straight dismatching constraints take linear or linear-logarithmic time in the size of the constraint. Ordered resolution with straight dismatching constraints is sound and complete, and the monadic shallow linear fragment with straight dismatching constraints is decidable. I have implemented my approach based on the SPASS theorem prover. On certain satisfiable problems, the implementation shows the ability to beat established provers such as SPASS, iProver, and Vampire.
Export
BibTeX
@phdthesis{Teuckephd2018, TITLE = {An Approximation and Refinement Approach to First-Order Automated Reasoning}, AUTHOR = {Teucke, Andreas}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-271963}, DOI = {10.22028/D291-27196}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, ABSTRACT = {With the goal of lifting model-based guidance from the propositional setting to first- order logic, I have developed an approximation theorem proving approach based on counterexample-guided abstraction refinement. A given clause set is transformed into a simplified form where satisfiability is decidable. This approximation extends the signature and preserves unsatisfiability: if the simplified clause set is satisfi- able, so is the original clause set. A resolution refutation generated by a decision procedure on the simplified clause set can then either be lifted to a refutation in the original clause set, or it guides a refinement excluding the previously found unliftable refutation. This way the approach is refutationally complete. The monadic shallow linear Horn fragment, which is the initial target of the approximation, is well-known to be decidable. It was a long standing open prob- lem how to extend the fragment to the non-Horn case, preserving decidability, that would, e.g., enable to express non-determinism in protocols. I have now proven de- cidability of the non-Horn monadic shallow linear fragment via ordered resolution. I further extend the clause language with a new type of constraints, called straight dismatching constraints. The extended clause language is motivated by an improved refinement of the approximation-refinement framework. All needed oper- ations on straight dismatching constraints take linear or linear logarithmic time in the size of the constraint. 
Ordered resolution with straight dismatching constraints is sound and complete and the monadic shallow linear fragment with straight dis- matching constraints is decidable. I have implemented my approach based on the SPASS theorem prover. On cer- tain satisfiable problems, the implementation shows the ability to beat established provers such as SPASS, iProver, and Vampire.}, }
Endnote
%0 Thesis %A Teucke, Andreas %A referee: Korovin, Konstatin %Y Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Automation of Logic, MPI for Informatics, Max Planck Society %T An Approximation and Refinement Approach to First-Order Automated Reasoning : %G eng %U http://hdl.handle.net/21.11116/0000-0001-8E49-E %R 10.22028/D291-27196 %U urn:nbn:de:bsz:291-scidok-ds-271963 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2018 %P XIV, 133 p. %V phd %9 phd %X With the goal of lifting model-based guidance from the propositional setting to first- order logic, I have developed an approximation theorem proving approach based on counterexample-guided abstraction refinement. A given clause set is transformed into a simplified form where satisfiability is decidable. This approximation extends the signature and preserves unsatisfiability: if the simplified clause set is satisfi- able, so is the original clause set. A resolution refutation generated by a decision procedure on the simplified clause set can then either be lifted to a refutation in the original clause set, or it guides a refinement excluding the previously found unliftable refutation. This way the approach is refutationally complete. The monadic shallow linear Horn fragment, which is the initial target of the approximation, is well-known to be decidable. It was a long standing open prob- lem how to extend the fragment to the non-Horn case, preserving decidability, that would, e.g., enable to express non-determinism in protocols. I have now proven de- cidability of the non-Horn monadic shallow linear fragment via ordered resolution. I further extend the clause language with a new type of constraints, called straight dismatching constraints. The extended clause language is motivated by an improved refinement of the approximation-refinement framework. 
All needed oper- ations on straight dismatching constraints take linear or linear logarithmic time in the size of the constraint. Ordered resolution with straight dismatching constraints is sound and complete and the monadic shallow linear fragment with straight dis- matching constraints is decidable. I have implemented my approach based on the SPASS theorem prover. On cer- tain satisfiable problems, the implementation shows the ability to beat established provers such as SPASS, iProver, and Vampire. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27069
[32]
X. Zhang, “Gaze Estimation and Interaction in Real-World Environments,” Universität des Saarlandes, Saarbrücken, 2018.
Export
BibTeX
@phdthesis{Zhang_PhD2018, TITLE = {Gaze Estimation and Interaction in Real-World Environments}, AUTHOR = {Zhang, Xucong}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-273666}, DOI = {10.22028/D291-27366}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, }
Endnote
%0 Thesis %A Zhang, Xucong %Y Bulling, Andreas %A referee: Schiele, Bernt %A referee: Andr&#233;, Elisabeth %A referee: Sato, Yoichi %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Gaze Estimation and Interaction in Real-World Environments : %G eng %U http://hdl.handle.net/21.11116/0000-0002-5C10-5 %R 10.22028/D291-27366 %U urn:nbn:de:bsz:291-scidok-ds-273666 %F OTHER: hdl:20.500.11880/27187 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2018 %P IX, 155 p. %V phd %9 phd %U http://dx.doi.org/10.22028/D291-27366
2017
[33]
R. Becker, “On Flows, Paths, Roots, and Zeros,” Universität des Saarlandes, Saarbrücken, 2017.
Export
BibTeX
@phdthesis{Becker_PhD2018, TITLE = {On Flows, Paths, Roots, and Zeros}, AUTHOR = {Becker, Ruben}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-273348}, DOI = {10.22028/D291-27334}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, }
Endnote
%0 Thesis %A Becker, Ruben %Y Mehlhorn, Kurt %A referee: Karrenbauer, Andreas %A referee: Sagraloff, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On Flows, Paths, Roots, and Zeros : %G eng %U http://hdl.handle.net/21.11116/0000-0003-3931-6 %R 10.22028/D291-27334 %U urn:nbn:de:bsz:291-scidok-ds-273348 %F OTHER: hdl:20.500.11880/27162 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P XI, 168 p. %V phd %9 phd %U http://dx.doi.org/10.22028/D291-27334
[34]
N. Boldyrev, “Alignment of Multi-Cultural Knowledge Repositories,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
The ability to interconnect multiple knowledge repositories within a single framework is a key asset for various use cases such as document retrieval and question answering. However, independently created repositories are inherently heterogeneous, reflecting their diverse origins. Thus, there is a need to align concepts and entities across knowledge repositories. A limitation of prior work is the assumption of high affinity between the repositories at hand, in terms of structure and terminology. The goal of this dissertation is to develop methods for constructing and curating alignments between multi-cultural knowledge repositories. The first contribution is a system, ACROSS, for reducing the terminological gap between repositories. The second contribution is two alignment methods, LILIANA and SESAME, that cope with structural diversity. The third contribution, LAIKA, is an approach to compute alignments between dynamic repositories. Experiments with a suite of Web-scale knowledge repositories show high-quality alignments. In addition, the application benefits of LILIANA and SESAME are demonstrated by use cases in search and exploration.
Export
BibTeX
@phdthesis{BOLDYREVPHD2017, TITLE = {Alignment of Multi-Cultural Knowledge Repositories}, AUTHOR = {Boldyrev, Natalia}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-269407}, DOI = {10.22028/D291-26940}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {The ability to interconnect multiple knowledge repositories within a single framework is a key asset for various use cases such as document retrieval and question answering. However, independently created repositories are inherently heterogeneous, reflecting their diverse origins. Thus, there is a need to align concepts and entities across knowledge repositories. A limitation of prior work is the assumption of high afinity between the repositories at hand, in terms of structure and terminology. The goal of this dissertation is to develop methods for constructing and curating alignments between multi-cultural knowledge repositories. The first contribution is a system, ACROSS, for reducing the terminological gap between repositories. The second contribution is two alignment methods, LILIANA and SESAME, that cope with structural diversity. The third contribution, LAIKA, is an approach to compute alignments between dynamic repositories. Experiments with a suite ofWeb-scale knowledge repositories show high quality alignments. In addition, the application benefits of LILIANA and SESAME are demonstrated by use cases in search and exploration.}, }
Endnote
%0 Thesis %A Boldyrev, Natalia %Y Weikum, Gerhard %A referee: Berberich, Klaus %A referee: Spaniol, Marc %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Alignment of Multi-Cultural Knowledge Repositories : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-87D8-2 %R 10.22028/D291-26940 %U urn:nbn:de:bsz:291-scidok-ds-269407 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %8 06.12.2017 %P X, 124 p. %V phd %9 phd %X The ability to interconnect multiple knowledge repositories within a single framework is a key asset for various use cases such as document retrieval and question answering. However, independently created repositories are inherently heterogeneous, reflecting their diverse origins. Thus, there is a need to align concepts and entities across knowledge repositories. A limitation of prior work is the assumption of high afinity between the repositories at hand, in terms of structure and terminology. The goal of this dissertation is to develop methods for constructing and curating alignments between multi-cultural knowledge repositories. The first contribution is a system, ACROSS, for reducing the terminological gap between repositories. The second contribution is two alignment methods, LILIANA and SESAME, that cope with structural diversity. The third contribution, LAIKA, is an approach to compute alignments between dynamic repositories. Experiments with a suite ofWeb-scale knowledge repositories show high quality alignments. In addition, the application benefits of LILIANA and SESAME are demonstrated by use cases in search and exploration. 
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/26891
[35]
A. Choudhary, “Approximation Algorithms for Vietoris-Rips and Čech Filtrations,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Persistent Homology is a tool to analyze and visualize the shape of data from a topological viewpoint. It computes persistence, which summarizes the evolution of topological and geometric information about metric spaces over multiple scales of distances. While computing persistence is quite efficient for low-dimensional topological features, it becomes overwhelmingly expensive for medium to high-dimensional features. In this thesis, we attack this computational problem from several different angles. We present efficient techniques to approximate the persistence of metric spaces. Three of our methods are tailored towards general point clouds in Euclidean spaces. We make use of high dimensional lattice geometry to reduce the cost of the approximations. In particular, we discover several properties of the Permutahedral lattice, whose Voronoi cell is well-known for its combinatorial properties. The last method is suitable for point clouds with low intrinsic dimension, where we exploit the structural properties of the point set to tame the complexity. In some cases, we achieve a reduction in size complexity by trading off the quality of the approximation. Two of our methods work particularly well in conjunction with dimension-reduction techniques: we arrive at the first approximation schemes whose complexities are only polynomial in the size of the point cloud, and independent of the ambient dimension. On the other hand, we provide a lower bound result: we construct a point cloud that requires super-polynomial complexity for a high-quality approximation of the persistence. Together with our approximation schemes, we show that polynomial complexity is achievable for rough approximations, but impossible for sufficiently fine approximations. For some metric spaces, the intrinsic dimension is low in small neighborhoods of the input points, but much higher for large scales of distances. We develop a concept of local intrinsic dimension to capture this property. 
We also present several applications of this concept, including an approximation method for persistence. This thesis is written in English.
Export
BibTeX
@phdthesis{Choudharyphd2017, TITLE = {Approximation Algorithms for {V}ietoris-Rips and \v{C}ech Filtrations}, AUTHOR = {Choudhary, Aruni}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-269597}, DOI = {10.22028/D291-26959}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Persistent Homology is a tool to analyze and visualize the shape of data from a topological viewpoint. It computes persistence, which summarizes the evolution of topological and geometric information about metric spaces over multiple scales of distances. While computing persistence is quite efficient for low-dimensional topological features, it becomes overwhelmingly expensive for medium to high-dimensional features. In this thesis, we attack this computational problem from several different angles. We present efficient techniques to approximate the persistence of metric spaces. Three of our methods are tailored towards general point clouds in Euclidean spaces. We make use of high dimensional lattice geometry to reduce the cost of the approximations. In particular, we discover several properties of the Permutahedral lattice, whose Voronoi cell is well-known for its combinatorial properties. The last method is suitable for point clouds with low intrinsic dimension, where we exploit the structural properties of the point set to tame the complexity. In some cases, we achieve a reduction in size complexity by trading off the quality of the approximation. Two of our methods work particularly well in conjunction with dimension-reduction techniques: we arrive at the first approximation schemes whose complexities are only polynomial in the size of the point cloud, and independent of the ambient dimension. On the other hand, we provide a lower bound result: we construct a point cloud that requires super-polynomial complexity for a high-quality approximation of the persistence. 
Together with our approximation schemes, we show that polynomial complexity is achievable for rough approximations, but impossible for sufficiently fine approximations. For some metric spaces, the intrinsic dimension is low in small neighborhoods of the input points, but much higher for large scales of distances. We develop a concept of local intrinsic dimension to capture this property. We also present several applications of this concept, including an approximation method for persistence. This thesis is written in English.}, }
Endnote
%0 Thesis %A Choudhary, Aruni %A referee: Kerber, Michael %Y Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Approximation Algorithms for Vietoris-Rips and &#264;ech Filtrations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-8D63-5 %U urn:nbn:de:bsz:291-scidok-ds-269597 %R 10.22028/D291-26959 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 123 p. %V phd %9 phd %X Persistent Homology is a tool to analyze and visualize the shape of data from a topological viewpoint. It computes persistence, which summarizes the evolution of topological and geometric information about metric spaces over multiple scales of distances. While computing persistence is quite efficient for low-dimensional topological features, it becomes overwhelmingly expensive for medium to high-dimensional features. In this thesis, we attack this computational problem from several different angles. We present efficient techniques to approximate the persistence of metric spaces. Three of our methods are tailored towards general point clouds in Euclidean spaces. We make use of high dimensional lattice geometry to reduce the cost of the approximations. In particular, we discover several properties of the Permutahedral lattice, whose Voronoi cell is well-known for its combinatorial properties. The last method is suitable for point clouds with low intrinsic dimension, where we exploit the structural properties of the point set to tame the complexity. In some cases, we achieve a reduction in size complexity by trading off the quality of the approximation. 
Two of our methods work particularly well in conjunction with dimension-reduction techniques: we arrive at the first approximation schemes whose complexities are only polynomial in the size of the point cloud, and independent of the ambient dimension. On the other hand, we provide a lower bound result: we construct a point cloud that requires super-polynomial complexity for a high-quality approximation of the persistence. Together with our approximation schemes, we show that polynomial complexity is achievable for rough approximations, but impossible for sufficiently fine approximations. For some metric spaces, the intrinsic dimension is low in small neighborhoods of the input points, but much higher for large scales of distances. We develop a concept of local intrinsic dimension to capture this property. We also present several applications of this concept, including an approximation method for persistence. This thesis is written in English. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/26911
[36]
C. Croitoru, “Graph Models for Rational Social Interaction,” Universität des Saarlandes, Saarbrücken, 2017.
Export
BibTeX
@phdthesis{CroitoruPhd2017, TITLE = {Graph Models for Rational Social Interaction}, AUTHOR = {Croitoru, Cosmina}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-270576}, DOI = {10.22028/D291-27057}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, }
Endnote
%0 Thesis %A Croitoru, Cosmina %Y Mehlhorn, Kurt %A referee: Amgoud, Leila %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Graph Models for Rational Social Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-87DE-5 %R 10.22028/D291-27057 %U urn:nbn:de:bsz:291-scidok-ds-270576 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P X, 75 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/26954
[37]
P. Danilewski, “ManyDSL: One Host for All Language Needs,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Languages shape thoughts. This is true for human spoken languages as much as for programming languages. As computers continue to expand their dominance in almost every aspect of our lives, the need to express new concepts and domains in computer languages more adequately arises. However, to evolve our thoughts we need to evolve the languages we speak in. But what tools are there to create and upgrade computer languages? How can we encourage developers to define their own languages quickly, to best match the domains they work in? Nowadays two main approaches exist. Dedicated language tools and parser generators allow defining new standalone languages from scratch. Alternatively, one can “abuse” sufficiently flexible host languages to embed small domain-specific languages within them. Both approaches have their respective limitations. Creating standalone languages is a major endeavor. Such languages cannot be combined easily with other languages. Embedding, on the other hand, is limited by the syntax of the host language. Embedded languages, once defined, are always present, without a clear distinction between them and the host language. When used extensively, this leads to one humongous conglomerate of languages, with confusing syntax and unexpected interactions. In this work we present an alternative: ManyDSL. It is a unique interpreter and compiler that draws strength from both approaches while avoiding the above weaknesses. ManyDSL features its own LL(1) parser generator, breaking the limits of the syntax of the host language. The grammar description is given in the same host language as the rest of the program. Portions of the grammar can be parametrized and abstracted into functions, in order to be used in other language definitions. Languages are created on the fly during the interpretation process and may be used to parse selected fragments of the subsequent source files.
Similarly to embedded languages, ManyDSL translates all custom languages to the same host language before execution. The host language uses a continuation-passing style with a novel, dynamic approach to staging. The staging allows for arbitrary partial evaluation and for executing code at different phases of the compilation process. This can be used to define domain-specific optimizations and auxiliary computation (e.g., for verification), all within an entirely functional approach, without any explicit use of abstract syntax trees or code transformations. With the help of ManyDSL, a user is able to create new languages with distinct, easily recognizable syntax. Moreover, he is able to define and use many such languages within a single project. Languages can be switched at a well-defined boundary, enabling their interaction in a clear and controlled way. ManyDSL is meant to be the first step towards a broader language pluralism. With it we want to encourage developers to design and use languages that best suit their needs. We believe that over time, with the help of grammar libraries, creating new languages will become accessible to every programmer.
Export
BibTeX
@phdthesis{Danilewskiphd17, TITLE = {Many{DSL} One Host for All Language Need}, AUTHOR = {Danilewski, Piotr}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-68840}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Languages shape thoughts. This is true for human spoken languages as much as for programming languages. As computers continue to expand their dominance in almost every aspect of our lives, the need to more adequately express new concepts and domains in computer languages arise. However, to evolve our thoughts we need to evolve the languages we speek in. But what tools are there to create and upgrade the computer languages? How can we encourage developers to define their own languages quickly to best match the domains they work in? Nowadays two main approaches exists. Dedicated language tools and parser generators allows to define new standalone languages from scratch. Alternatively, one can {\textquotedblleft}abuse{\textquotedblright} sufficiently flexible host languages to embed small domain- specific languages within them. Both approaches have their own respective limitations. Creating standalone languages is a major endeavor. Such languages cannot be combined easily with other languages. Embedding, on the other hand, is limited by the syntax of the host language. Embedded languages, once defined, are always present without clear distinction between them and the host language. When used extensively, it leads to one humungous conglomerate of languages, with confusing syntax and unexpected interactions. In this work we present an alternative: ManyDSL. It is a unique interpreter and compiler taking strength from both approaches, while avoiding the above weaknesses. ManyDSL features its own LL1 parser generator, breaking the limits of the syntax of the host language. The grammar description is given in the same host language as the rest of the program. 
Portions of the grammar can be parametrized and abstracted into functions, in order to be used in other language definitions. Languages are created on the fly during the interpretation process and may be used to parse selected fragments of the subsequent source files. Similarly to embedded languages, ManyDSL translates all custom languages to the same host language before execution. The host language uses a continuation- passing style approach with a novel, dynamic approach to staging. The staging allows for arbitrary partial evaluation, and executing code at different phases of the compilation process. This can be used to define domain-specific optimiza- tions and auxiliary computation (e.g. for verification) --- all within an entirely functional approach, without any explicit use of abstract syntax trees and code transformations. With the help of ManyDSL a user is able to create new languages with distinct, easily recognizable syntax. Moreover, he is able to define and use many of such languages within a single project. Languages can be switched with a well-defined boundary, enabling their interaction in a clear and controlled way. ManyDSL is meant to be the first step towards a broader language pluralism. With it we want to encourage developers to design and use languages that best suit their needs. We believe that over time, with the help of grammar libraries, creating new languages will become accessible to every programmer.}, }
Endnote
%0 Thesis %A Danilewski, Piotr %Y Slussalek, Philipp %A referee: Reinhard, Wilhelm %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T ManyDSL One Host for All Language Need : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-934E-8 %U urn:nbn:de:bsz:291-scidok-68840 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 257 p. %V phd %9 phd %X Languages shape thoughts. This is true for human spoken languages as much as for programming languages. As computers continue to expand their dominance in almost every aspect of our lives, the need to more adequately express new concepts and domains in computer languages arise. However, to evolve our thoughts we need to evolve the languages we speek in. But what tools are there to create and upgrade the computer languages? How can we encourage developers to define their own languages quickly to best match the domains they work in? Nowadays two main approaches exists. Dedicated language tools and parser generators allows to define new standalone languages from scratch. Alternatively, one can &#8220;abuse&#8221; sufficiently flexible host languages to embed small domain- specific languages within them. Both approaches have their own respective limitations. Creating standalone languages is a major endeavor. Such languages cannot be combined easily with other languages. Embedding, on the other hand, is limited by the syntax of the host language. Embedded languages, once defined, are always present without clear distinction between them and the host language. When used extensively, it leads to one humungous conglomerate of languages, with confusing syntax and unexpected interactions. In this work we present an alternative: ManyDSL. It is a unique interpreter and compiler taking strength from both approaches, while avoiding the above weaknesses. 
ManyDSL features its own LL1 parser generator, breaking the limits of the syntax of the host language. The grammar description is given in the same host language as the rest of the program. Portions of the grammar can be parametrized and abstracted into functions, in order to be used in other language definitions. Languages are created on the fly during the interpretation process and may be used to parse selected fragments of the subsequent source files. Similarly to embedded languages, ManyDSL translates all custom languages to the same host language before execution. The host language uses a continuation- passing style approach with a novel, dynamic approach to staging. The staging allows for arbitrary partial evaluation, and executing code at different phases of the compilation process. This can be used to define domain-specific optimiza- tions and auxiliary computation (e.g. for verification) &#8212; all within an entirely functional approach, without any explicit use of abstract syntax trees and code transformations. With the help of ManyDSL a user is able to create new languages with distinct, easily recognizable syntax. Moreover, he is able to define and use many of such languages within a single project. Languages can be switched with a well-defined boundary, enabling their interaction in a clear and controlled way. ManyDSL is meant to be the first step towards a broader language pluralism. With it we want to encourage developers to design and use languages that best suit their needs. We believe that over time, with the help of grammar libraries, creating new languages will become accessible to every programmer. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6884/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[38]
M. Dirnberger, “Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum Polycephalum,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
This doctoral thesis aims towards distributed natural computing inspired by the slime mold Physarum polycephalum. The vein networks formed by this organism presumably support efficient transport of protoplasmic fluid. Devising models which capture the natural efficiency of the organism and form a suitable basis for the development of natural computing algorithms is an interesting and challenging goal. We start working towards this goal by designing and executing wet-lab experiments geared towards producing a large number of images of the vein networks of P. polycephalum. Next, we turn the depicted vein networks into graphs using our own custom software called Nefi. This enables a detailed numerical study, yielding a catalogue of characterizing observables spanning a wide array of different graph properties. To share our results and data, i.e. raw experimental data, graphs and analysis results, we introduce a dedicated repository revolving around slime mold data, the Smgr. The purpose of this repository is to promote data reuse and to foster a practice of increased data sharing. Finally, we present a model based on interacting electronic circuits including current controlled voltage sources, which mimics the emergent flow patterns observed in live P. polycephalum. The model is simple, distributed and robust to changes in the underlying network topology. Thus it constitutes a promising basis for the development of distributed natural computing algorithms.
Export
BibTeX
@phdthesis{dirnbergerphd17, TITLE = {Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum Polycephalum}, AUTHOR = {Dirnberger, Michael}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69424}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {This doctoral thesis aims towards distributed natural computing inspired by the slime mold Physarum polycephalum. The vein networks formed by this organism presumably support efficient transport of protoplasmic fluid. Devising models which capture the natural efficiency of the organism and form a suitable basis for the development of natural computing algorithms is an interesting and challenging goal. We start working towards this goal by designing and executing wet-lab experiments geared towards producing a large number of images of the vein networks of P. polycephalum. Next, we turn the depicted vein networks into graphs using our own custom software called Nefi. This enables a detailed numerical study, yielding a catalogue of characterizing observables spanning a wide array of different graph properties. To share our results and data, i.e. raw experimental data, graphs and analysis results, we introduce a dedicated repository revolving around slime mold data, the Smgr. The purpose of this repository is to promote data reuse and to foster a practice of increased data sharing. Finally, we present a model based on interacting electronic circuits including current controlled voltage sources, which mimics the emergent flow patterns observed in live P. polycephalum. The model is simple, distributed and robust to changes in the underlying network topology. Thus it constitutes a promising basis for the development of distributed natural computing algorithms.}, }
Endnote
%0 Thesis %A Dirnberger, Michael %Y Mehlhorn, Kurt %A referee: Grube, Martin %A referee: Döbereiner, Hans-Günther %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum Polycephalum : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-DE4F-0 %U urn:nbn:de:bsz:291-scidok-69424 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P XV, 193 p. %V phd %9 phd %X This doctoral thesis aims towards distributed natural computing inspired by the slime mold Physarum polycephalum. The vein networks formed by this organism presumably support efficient transport of protoplasmic fluid. Devising models which capture the natural efficiency of the organism and form a suitable basis for the development of natural computing algorithms is an interesting and challenging goal. We start working towards this goal by designing and executing wet-lab experiments geared towards producing a large number of images of the vein networks of P. polycephalum. Next, we turn the depicted vein networks into graphs using our own custom software called Nefi. This enables a detailed numerical study, yielding a catalogue of characterizing observables spanning a wide array of different graph properties. To share our results and data, i.e. raw experimental data, graphs and analysis results, we introduce a dedicated repository revolving around slime mold data, the Smgr. The purpose of this repository is to promote data reuse and to foster a practice of increased data sharing. Finally, we present a model based on interacting electronic circuits including current controlled voltage sources, which mimics the emergent flow patterns observed in live P. polycephalum. The model is simple, distributed and robust to changes in the underlying network topology. Thus it constitutes a promising basis for the development of distributed natural computing algorithms. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6942/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[39]
S. Dutta, “Efficient Knowledge Management for Named Entities from Text,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge, in this regard, entails the need for precise identification and disambiguation of entities across documents for extraction of attributes/relations and their proper representation in knowledge bases. Additionally, the applicability of such repositories not only involves the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation aims to tackle the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. This dissertation presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and their disambiguation to canonical entities present in a knowledge base, by using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. This work further presents language and consistency features for classification models to compute the credibility of obtained textual facts, ensuring quality of the extracted information. Finally, an encoding algorithm, using frequent term detection and improved data locality, to represent entities for enhanced knowledge base storage and query performance is presented.
Export
BibTeX
@phdthesis{duttaphd17, TITLE = {Efficient Knowledge Management for Named Entities from Text}, AUTHOR = {Dutta, Sourav}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67924}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge, in this regard, entails the need for precise identification and disambiguation of entities across documents for extraction of attributes/relations and their proper representation in knowledge bases. Additionally, the applicability of such repositories not only involves the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation aims to tackle the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. This dissertation presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and their disambiguation to canonical entities present in a knowledge base, by using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. This work further presents language and consistency features for classification models to compute the credibility of obtained textual facts, ensuring quality of the extracted information. Finally, an encoding algorithm, using frequent term detection and improved data locality, to represent entities for enhanced knowledge base storage and query performance is presented.}, }
Endnote
%0 Thesis %A Dutta, Sourav %Y Weikum, Gerhard %A referee: Nejdl, Wolfgang %A referee: Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T Efficient Knowledge Management for Named Entities from Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-A793-E %U urn:nbn:de:bsz:291-scidok-67924 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P xv, 134 p. %V phd %9 phd %X The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge, in this regard, entails the need for precise identification and disambiguation of entities across documents for extraction of attributes/relations and their proper representation in knowledge bases. Additionally, the applicability of such repositories not only involves the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation aims to tackle the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. This dissertation presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and their disambiguation to canonical entities present in a knowledge base, by using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. This work further presents language and consistency features for classification models to compute the credibility of obtained textual facts, ensuring quality of the extracted information. Finally, an encoding algorithm, using frequent term detection and improved data locality, to represent entities for enhanced knowledge base storage and query performance is presented. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6792/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[40]
S. Friedrichs, “Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
We study three problems. The first is the phenomenon of metastability in digital circuits. This is a state of bistable storage elements, such as registers, that is neither logical 0 nor 1 and breaks the abstraction of Boolean logic. We propose a time- and value-discrete model for metastability in digital circuits and show that it reflects relevant physical properties. Further, we propose the fundamentally new approach of using logical masking to perform meaningful computations despite the presence of metastable upsets and analyze what functions can be computed in our model. Additionally, we show that circuits with masking registers grow computationally more powerful with each available clock cycle. The second topic is parallel algorithms, based on an algebraic abstraction of the Moore-Bellman-Ford algorithm, for solving various distance problems. Our focus is on distance approximations that obey the triangle inequality while at the same time achieving polylogarithmic depth and low work. Finally, we study the continuous Terrain Guarding Problem. We show that it has a rational discretization with a quadratic number of guard candidates, establish its membership in NP and the existence of a PTAS, and present an efficient implementation of a solver.
Export
BibTeX
@phdthesis{Friedrichsphd2017, TITLE = {Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding}, AUTHOR = {Friedrichs, Stephan}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69660}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {We study three problems. The first is the phenomenon of metastability in digital circuits. This is a state of bistable storage elements, such as registers, that is neither logical 0 nor 1 and breaks the abstraction of Boolean logic. We propose a time- and value-discrete model for metastability in digital circuits and show that it reflects relevant physical properties. Further, we propose the fundamentally new approach of using logical masking to perform meaningful computations despite the presence of metastable upsets and analyze what functions can be computed in our model. Additionally, we show that circuits with masking registers grow computationally more powerful with each available clock cycle. The second topic is parallel algorithms, based on an algebraic abstraction of the Moore-Bellman-Ford algorithm, for solving various distance problems. Our focus is on distance approximations that obey the triangle inequality while at the same time achieving polylogarithmic depth and low work. Finally, we study the continuous Terrain Guarding Problem. We show that it has a rational discretization with a quadratic number of guard candidates, establish its membership in NP and the existence of a PTAS, and present an efficient implementation of a solver.}, }
Endnote
%0 Thesis %A Friedrichs, Stephan %Y Lenzen, Christoph %A referee: Mehlhorn, Kurt %A referee: Ghaffari, Mohsen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-E9A7-B %U urn:nbn:de:bsz:291-scidok-69660 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P x, 226 p. %V phd %9 phd %X We study three problems. The first is the phenomenon of metastability in digital circuits. This is a state of bistable storage elements, such as registers, that is neither logical 0 nor 1 and breaks the abstraction of Boolean logic. We propose a time- and value-discrete model for metastability in digital circuits and show that it reflects relevant physical properties. Further, we propose the fundamentally new approach of using logical masking to perform meaningful computations despite the presence of metastable upsets and analyze what functions can be computed in our model. Additionally, we show that circuits with masking registers grow computationally more powerful with each available clock cycle. The second topic is parallel algorithms, based on an algebraic abstraction of the Moore-Bellman-Ford algorithm, for solving various distance problems. Our focus is on distance approximations that obey the triangle inequality while at the same time achieving polylogarithmic depth and low work. Finally, we study the continuous Terrain Guarding Problem. We show that it has a rational discretization with a quadratic number of guard candidates, establish its membership in NP and the existence of a PTAS, and present an efficient implementation of a solver. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6966/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[41]
P. Garrido, “High-quality face capture, animation and editing from monocular video,” Universität des Saarlandes, Saarbrücken, 2017.
Export
BibTeX
@phdthesis{Garridophd17, TITLE = {High-quality face capture, animation and editing from monocular video}, AUTHOR = {Garrido, Pablo}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69419}, DOI = {10.22028/D291-26785}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, }
Endnote
%0 Thesis %A Garrido, Pablo %Y Theobalt, Christian %A referee: Perez, Patrick %A referee: Pauly, Mark %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T High-quality face capture, animation and editing from monocular video : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-D1BC-2 %U urn:nbn:de:bsz:291-scidok-69419 %R 10.22028/D291-26785 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 185 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6941/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[42]
Y. Gryaditskaya, “High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Two-dimensional, conventional images are gradually losing their hegemony, leaving room for novel formats. Among these, 8-bit images give way to high dynamic range (HDR) image formats, which improve the colour gamut and the visibility of details in dark and bright areas, leading to a more immersive viewing experience. This opens wide opportunities for post-processing, which can be useful for artistic rendering, enhancement of the viewing experience, or medical applications. Simultaneously, light-field scene representation is also gaining importance, propelled by the recent reappearance of virtual reality and the improvement of both acquisition techniques and computational and storage capabilities. Light-field data likewise allows a broad range of effects in post-production: among others, it enables changing the camera position, the aperture, or the focal length. It facilitates object insertions and simplifies the visual effects workflow by integrating the 3D nature of visual effects with the 3D nature of light fields. Content generation is one of the stumbling blocks in these realms. Sensor limitations of a conventional camera do not allow capturing a wide dynamic range. This is especially the case for mobile devices, where small sensors are optimised for capturing in high resolution. The “HDR mode” often encountered on such devices relies on a technique called “exposure fusion” and partially overcomes the limited range of a sensor. HDR video, at the same time, remains a challenging problem. We suggest a solution for HDR video capture on a mobile device. We analyse the dynamic range of motion regions, the regions most prone to reconstruction artefacts, and suggest a real-time exposure selection algorithm. Further, an HDR content visualization task often requires its input to be in absolute values. We address this problem by presenting a calibration algorithm that can be applied to existing imagery and does not require any additional measurement hardware.
Finally, as light-field use becomes more common, a key challenge is the ability to edit or modify the appearance of the objects in the light field. To this end, we propose a multidimensional filtering approach in which the specular highlights are filtered in the spatial and angular domains to target a desired increase of the material roughness.
Export
BibTeX
@phdthesis{Gryphd17, TITLE = {High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing}, AUTHOR = {Gryaditskaya, Yulia}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69296}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Two-dimensional, conventional images are gradually losing their hegemony, leaving room for novel formats. Among these, 8-bit images give way to high dynamic range (HDR) image formats, which improve the colour gamut and the visibility of details in dark and bright areas, leading to a more immersive viewing experience. This opens wide opportunities for post-processing, which can be useful for artistic rendering, enhancement of the viewing experience, or medical applications. Simultaneously, light-field scene representation is also gaining importance, propelled by the recent reappearance of virtual reality and the improvement of both acquisition techniques and computational and storage capabilities. Light-field data likewise allows a broad range of effects in post-production: among others, it enables changing the camera position, the aperture, or the focal length. It facilitates object insertions and simplifies the visual effects workflow by integrating the 3D nature of visual effects with the 3D nature of light fields. Content generation is one of the stumbling blocks in these realms. Sensor limitations of a conventional camera do not allow capturing a wide dynamic range. This is especially the case for mobile devices, where small sensors are optimised for capturing in high resolution. The {\textquotedblleft}HDR mode{\textquotedblright} often encountered on such devices relies on a technique called {\textquotedblleft}exposure fusion{\textquotedblright} and partially overcomes the limited range of a sensor. HDR video, at the same time, remains a challenging problem. We suggest a solution for HDR video capture on a mobile device. We analyse the dynamic range of motion regions, the regions most prone to reconstruction artefacts, and suggest a real-time exposure selection algorithm. Further, an HDR content visualization task often requires its input to be in absolute values. We address this problem by presenting a calibration algorithm that can be applied to existing imagery and does not require any additional measurement hardware. Finally, as light-field use becomes more common, a key challenge is the ability to edit or modify the appearance of the objects in the light field. To this end, we propose a multidimensional filtering approach in which the specular highlights are filtered in the spatial and angular domains to target a desired increase of the material roughness.}, }
Endnote
%0 Thesis %A Gryaditskaya, Yulia %Y Seidel, Hans-Peter %A referee: Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-ABA6-3 %U urn:nbn:de:bsz:291-scidok-69296 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 88 p. %V phd %9 phd %X Two-dimensional, conventional images are gradually losing their hegemony, leaving room for novel formats. Among these, 8-bit images give way to high dynamic range (HDR) image formats, which improve the colour gamut and the visibility of details in dark and bright areas, leading to a more immersive viewing experience. This opens wide opportunities for post-processing, which can be useful for artistic rendering, enhancement of the viewing experience, or medical applications. Simultaneously, light-field scene representation is also gaining importance, propelled by the recent reappearance of virtual reality and the improvement of both acquisition techniques and computational and storage capabilities. Light-field data likewise allows a broad range of effects in post-production: among others, it enables changing the camera position, the aperture, or the focal length. It facilitates object insertions and simplifies the visual effects workflow by integrating the 3D nature of visual effects with the 3D nature of light fields. Content generation is one of the stumbling blocks in these realms. Sensor limitations of a conventional camera do not allow capturing a wide dynamic range. This is especially the case for mobile devices, where small sensors are optimised for capturing in high resolution. The “HDR mode” often encountered on such devices relies on a technique called “exposure fusion” and partially overcomes the limited range of a sensor. HDR video, at the same time, remains a challenging problem. We suggest a solution for HDR video capture on a mobile device. We analyse the dynamic range of motion regions, the regions most prone to reconstruction artefacts, and suggest a real-time exposure selection algorithm. Further, an HDR content visualization task often requires its input to be in absolute values. We address this problem by presenting a calibration algorithm that can be applied to existing imagery and does not require any additional measurement hardware. Finally, as light-field use becomes more common, a key challenge is the ability to edit or modify the appearance of the objects in the light field. To this end, we propose a multidimensional filtering approach in which the specular highlights are filtered in the spatial and angular domains to target a desired increase of the material roughness. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6929/
[43]
A. Grycner, “Constructing Lexicons of Relational Phrases,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Knowledge Bases are one of the key components of Natural Language Understanding systems. For example, DBpedia, YAGO, and Wikidata capture and organize knowledge about named entities and relations between them, which is often crucial for tasks like Question Answering and Named Entity Disambiguation. While Knowledge Bases have good coverage of prominent entities, they are often limited with respect to relations. The goal of this thesis is to bridge this gap and automatically create lexicons of textual representations of relations, namely relational phrases. The lexicons should contain information about paraphrases, hierarchy, as well as semantic types of arguments of relational phrases. The thesis makes three main contributions. The first contribution addresses disambiguating relational phrases by aligning them with the WordNet dictionary. Moreover, the alignment allows imposing the WordNet hierarchy on the relational phrases. The second contribution proposes a method for graph construction of relations using Probabilistic Graphical Models. In addition, we apply this model to relation paraphrasing. The third contribution presents a method for constructing a lexicon of relational paraphrases with fine-grained semantic typing of arguments. This method is based on information from a multilingual parallel corpus.
Export
BibTeX
@phdthesis{Grynerphd17, TITLE = {Constructing Lexicons of Relational Phrases}, AUTHOR = {Grycner, Adam}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69101}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Knowledge Bases are one of the key components of Natural Language Understanding systems. For example, DBpedia, YAGO, and Wikidata capture and organize knowledge about named entities and relations between them, which is often crucial for tasks like Question Answering and Named Entity Disambiguation. While Knowledge Bases have good coverage of prominent entities, they are often limited with respect to relations. The goal of this thesis is to bridge this gap and automatically create lexicons of textual representations of relations, namely relational phrases. The lexicons should contain information about paraphrases, hierarchy, as well as semantic types of arguments of relational phrases. The thesis makes three main contributions. The first contribution addresses disambiguating relational phrases by aligning them with the WordNet dictionary. Moreover, the alignment allows imposing the WordNet hierarchy on the relational phrases. The second contribution proposes a method for graph construction of relations using Probabilistic Graphical Models. In addition, we apply this model to relation paraphrasing. The third contribution presents a method for constructing a lexicon of relational paraphrases with fine-grained semantic typing of arguments. This method is based on information from a multilingual parallel corpus.}, }
Endnote
%0 Thesis %A Grycner, Adam %Y Weikum, Gerhard %A referee: Klakow, Dietrich %A referee: Ponzetto, Simone Paolo %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Constructing Lexicons of Relational Phrases : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-933B-1 %U urn:nbn:de:bsz:291-scidok-69101 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 125 p. %V phd %9 phd %X Knowledge Bases are one of the key components of Natural Language Understanding systems. For example, DBpedia, YAGO, and Wikidata capture and organize knowledge about named entities and relations between them, which is often crucial for tasks like Question Answering and Named Entity Disambiguation. While Knowledge Bases have good coverage of prominent entities, they are often limited with respect to relations. The goal of this thesis is to bridge this gap and automatically create lexicons of textual representations of relations, namely relational phrases. The lexicons should contain information about paraphrases, hierarchy, as well as semantic types of arguments of relational phrases. The thesis makes three main contributions. The first contribution addresses disambiguating relational phrases by aligning them with the WordNet dictionary. Moreover, the alignment allows imposing the WordNet hierarchy on the relational phrases. The second contribution proposes a method for graph construction of relations using Probabilistic Graphical Models. In addition, we apply this model to relation paraphrasing. The third contribution presents a method for constructing a lexicon of relational paraphrases with fine-grained semantic typing of arguments. This method is based on information from a multilingual parallel corpus. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6910/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[44]
S. Gurajada, “Distributed Querying of Large Labeled Graphs,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Graphs are a vital abstract data type with profound significance in several applications. Because of their versatility, graphs have been adapted into several different forms, and one such adaptation with many practical applications is the “Labeled Graph”, where vertices and edges are labeled. An enormous research effort has been invested into the task of managing and querying graphs, yet many challenges remain unsolved. In this thesis, we advance the state of the art for the following query models and propose a distributed solution to process them in an efficient and scalable manner. • Set Reachability. We formalize and investigate a generalization of the basic notion of reachability, called set reachability. Set reachability deals with finding all reachable pairs between a given source set and target set. We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query. This is achieved by precomputation, replication, and indexing of partial reachabilities among the boundary vertices. • Basic Graph Patterns (BGP). Supported by a majority of query languages, BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on asynchronous execution, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner. • Generalized Graph Patterns (GGP). These queries combine the semantics of pattern matching and navigational queries, and are popular in scenarios where the schema of the underlying graph is unknown or only partially known. We present a distributed solution with a bimodal indexing layout that individually supports efficient processing of BGP queries and navigational queries. Furthermore, we design a unified query optimizer and processor to evaluate GGP queries efficiently and scalably.
To this end, we propose a prototype distributed engine, coined “TriAD” (Triple Asynchronous and Distributed), that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets.
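The thesis's single-round distributed algorithm relies on precomputed boundary-vertex reachabilities; as a point of reference only, the set-reachability semantics itself can be sketched on a single machine with one BFS per source (a minimal illustration, not the thesis's algorithm; all names are hypothetical):

```python
from collections import deque

def set_reachability(adj, sources, targets):
    """Return all pairs (s, t) with s in sources, t in targets, such that t is
    reachable from s in the directed graph adj (a vertex reaches itself)."""
    targets = set(targets)
    pairs = set()
    for s in sources:
        # BFS from s, collecting every reachable target vertex
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u in targets:
                pairs.add((s, u))
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return pairs
```

This naive version costs one traversal per source vertex; the point of the distributed solution summarized above is precisely to avoid such repeated traversals across partitions.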
Export
BibTeX
@phdthesis{guraphd2017, TITLE = {Distributed Querying of Large Labeled Graphs}, AUTHOR = {Gurajada, Sairam}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67738}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Graph is a vital abstract data type that has profound significance in several applications. Because of its versitality, graphs have been adapted into several different forms and one such adaption with many practical applications is the {\textquotedblleft}Labeled Graph{\textquotedblright}, where vertices and edges are labeled. An enormous research effort has been invested in to the task of managing and querying graphs, yet a lot challenges are left unsolved. In this thesis, we advance the state-of-the-art for the following query models, and propose a distributed solution to process them in an efficient and scalable manner. \mbox{$\bullet$} Set Reachability. We formalize and investigate a generalization of the basic notion of reachability, called set reachability. Set reachability deals with finding all reachable pairs for a given source and target sets. We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query. This is achieved by precomputation, replication, and indexing of partial reachabilities among the boundary vertices. \mbox{$\bullet$} Basic Graph Patterns (BGP). Supported by majority of query languages, BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on the concepts of asynchronous executions, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner. \mbox{$\bullet$} Generalized Graph Patterns (GGP). 
These queries combine the semantics of pattern matching and navigational queries, and are popular in scenarios where the schema of an underlying graph is either unknown or partially known. We present a distributed solution with bimodal indexing layout that individually support efficient processing of BGP queries and navigational queries. Furthermore, we design a unified query optimizer and a processor to efficiently process GGP queries and also in a scalable manner. To this end, we propose a prototype distributed engine, coined {\textquotedblleft}TriAD{\textquotedblright} (Triple Asynchronous and Distributed) that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets.}, }
[45]
V. Hashemi, “Decision Algorithms for Modelling, Optimal Control and Verification of Probabilistic Systems,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Markov Decision Processes (MDPs) constitute a mathematical framework for modelling systems featuring both probabilistic and nondeterministic behaviour. They are widely used to solve sequential decision making problems, have been applied successfully in operations research, artificial intelligence, and stochastic control theory, and have been extended conservatively to the model of probabilistic automata in the context of concurrent probabilistic systems. However, when modelling a physical system they suffer from several limitations. One of the most important is the inherent loss of precision introduced by measurement errors and discretization artifacts, which necessarily arise from incomplete knowledge about the system behaviour. As a result, the true probability distribution for transitions is in most cases an uncertain value, determined by either external parameters or confidence intervals. Interval Markov decision processes (IMDPs) generalize classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems with an additional variation or uncertainty that reflects the absence of precise knowledge concerning transition probabilities. In this dissertation, we focus on decision algorithms for modelling and performance evaluation of such probabilistic systems, leveraging techniques from mathematical optimization. From a modelling viewpoint, we address probabilistic bisimulations to reduce the size of system models while preserving the logical properties they satisfy. We also discuss the key ingredients needed to construct systems by composing them out of smaller components running in parallel. Furthermore, we introduce a novel stochastic model, Uncertain weighted Markov Decision Processes (UwMDPs), to capture quantities like preferences or priorities in a nondeterministic scenario with uncertainties.
This model is close to the model of IMDPs but more convenient to work with in the context of bisimulation minimization. From a performance evaluation perspective, we consider the problem of multi-objective robust strategy synthesis for IMDPs, where the aim is to find a robust strategy that guarantees the satisfaction of multiple properties at the same time in the face of transition probability uncertainty. In this respect, we discuss the computational complexity of the problem and present a value iteration-based decision algorithm to approximate the Pareto set of achievable optimal points. Moreover, we consider the problem of computing maximal/minimal reward-bounded reachability probabilities on UwMDPs, for which we present an efficient algorithm running in pseudo-polynomial time. We demonstrate the practical effectiveness of our proposed approaches by applying them to a collection of real-world case studies using several prototypical tools.
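The thesis's value-iteration-based algorithm targets the multi-objective Pareto set; a minimal single-objective sketch of robust value iteration over interval-valued transition probabilities is given below (the inner adversary greedily shifts probability mass toward low-value successors; function and parameter names are hypothetical, not taken from the thesis):

```python
def worst_case_expectation(values, lo, hi):
    # Adversary picks a distribution within [lo_i, hi_i] (summing to 1)
    # minimizing the expectation: push spare mass to low-value successors.
    order = sorted(range(len(values)), key=lambda i: values[i])
    p = list(lo)
    budget = 1.0 - sum(lo)
    for i in order:
        give = min(hi[i] - lo[i], budget)
        p[i] += give
        budget -= give
    return sum(p[i] * values[i] for i in range(len(values)))

def robust_reach(states, actions, goal, eps=1e-8):
    # actions: state -> list of (successor list, lower bounds, upper bounds).
    # Computes max-min probability of reaching `goal` per state.
    V = {s: (1.0 if s in goal else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goal:
                continue
            best = 0.0
            for succs, lo, hi in actions[s]:
                vals = [V[t] for t in succs]
                best = max(best, worst_case_expectation(vals, lo, hi))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

For example, from a state whose single action reaches the goal with probability in [0.3, 0.8] and a sink with probability in [0.2, 0.7], the adversary drives the goal probability down to its lower bound 0.3.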
Export
BibTeX
@phdthesis{Hashemiphd2018, TITLE = {Decision Algorithms for Modelling, Optimal Control and Verification of Probabilistic Systems}, AUTHOR = {Hashemi, Vahid}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-270397}, DOI = {10.22028/D291-27039}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Markov Decision Processes (MDPs) constitute a mathematical framework for modelling systems featuring both probabilistic and nondeterministic behaviour. They are widely used to solve sequential decision making problems and applied successfully in operations research, artificial intelligence, and stochastic control theory, and have been extended conservatively to the model of probabilistic automata in the context of concurrent probabilistic systems. However, when modelling a physical system they suffer from several limitations. One of the most important is the inherent loss of precision that is introduced by measurement errors and discretization artifacts which necessarily happen due to incomplete knowledge about the system behavior. As a result, the true probability distribution for transitions is in most cases an uncertain value, determined by either external parameters or confidence intervals. Interval Markov decision processes (IMDPs) generalize classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems with an additional variation or uncertainty that reflects the absence of precise knowledge concerning transition probabilities. In this dissertation, we focus on decision algorithms for modelling and performance evaluation of such probabilistic systems leveraging techniques from mathematical optimization. From a modelling viewpoint, we address probabilistic bisimulations to reduce the size of the system models while preserving the logical properties they satisfy.
We also discuss the key ingredients to construct systems by composing them out of smaller components running in parallel. Furthermore, we introduce a novel stochastic model, Uncertain weighted Markov Decision Processes (UwMDPs), so as to capture quantities like preferences or priorities in a nondeterministic scenario with uncertainties. This model is close to the model of IMDPs but more convenient to work with in the context of bisimulation minimization. From a performance evaluation perspective, we consider the problem of multi-objective robust strategy synthesis for IMDPs, where the aim is to find a robust strategy that guarantees the satisfaction of multiple properties at the same time in the face of the transition probability uncertainty. In this respect, we discuss the computational complexity of the problem and present a value iteration-based decision algorithm to approximate the Pareto set of achievable optimal points. Moreover, we consider the problem of computing maximal/minimal reward-bounded reachability probabilities on UwMDPs, for which we present an efficient algorithm running in pseudo-polynomial time. We demonstrate the practical effectiveness of our proposed approaches by applying them to a collection of real-world case studies using several prototypical tools.}, }
[46]
J. Hosang, “Analysis and Improvement of the Visual Object Detection Pipeline,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Visual object detection has seen substantial improvements in recent years due to the possibilities enabled by deep learning. While research on image classification provides continuous progress on how to learn image representations and classifiers jointly, object detection research focuses on identifying how to properly use deep learning technology to effectively localise objects. In this thesis, we analyse and improve different aspects of the commonly used detection pipeline. We analyse ten years of research on pedestrian detection and find that improvement of feature representations was the driving factor. Motivated by this finding, we adapt an end-to-end learned detector architecture from general object detection to pedestrian detection. Our deep network outperforms all previous neural networks for pedestrian detection by a large margin, even without using additional training data. After substantial improvements on pedestrian detection in recent years, we investigate the gap between human performance and state-of-the-art pedestrian detectors. We find that pedestrian detectors still have a long way to go before they reach human performance, and we diagnose failure modes of several top-performing detectors, giving direction to future research. As a side effect, we publish new, better localised annotations for the Caltech pedestrian benchmark. We analyse detection proposals as a preprocessing step for object detectors. We establish different metrics and compare a wide range of methods according to these metrics. By examining the relationship between localisation of proposals and final object detection performance, we define and experimentally verify a metric that can be used as a proxy for detector performance. Furthermore, we address a structural weakness of virtually all object detection pipelines: non-maximum suppression. We analyse why it is necessary and what the shortcomings of the most common approach are.
To address these problems, we present work to overcome these shortcomings and to replace typical non-maximum suppression with a learnable alternative. The introduced paradigm paves the way to true end-to-end learning of object detectors without any post-processing. In summary, this thesis provides analyses of recent pedestrian detectors and detection proposals, improves pedestrian detection by employing deep neural networks, and presents a viable alternative to traditional non-maximum suppression.
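The "most common approach" to non-maximum suppression whose shortcomings the thesis analyses is the greedy overlap-threshold procedure; a minimal sketch for axis-aligned boxes `(x1, y1, x2, y2)` follows (names hypothetical, for illustration only):

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    discard all remaining boxes overlapping it above iou_thresh, repeat."""
    def iou(a, b):
        # Intersection-over-union of two axis-aligned boxes.
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep
```

The hard overlap threshold is exactly the weakness discussed above: it cannot distinguish a duplicate detection from a genuinely overlapping second object, which motivates the learnable alternative the thesis proposes.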
Export
BibTeX
@phdthesis{Hosangphd17, TITLE = {Analysis and Improvement of the Visual Object Detection Pipeline}, AUTHOR = {Hosang, Jan}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69080}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Visual object detection has seen substantial improvements during the last years due to the possibilities enabled by deep learning. While research on image classification provides continuous progress on how to learn image representations and classifiers jointly, object detection research focuses on identifying how to properly use deep learning technology to effectively localise objects. In this thesis, we analyse and improve different aspects of the commonly used detection pipeline. We analyse ten years of research on pedestrian detection and find that improvement of feature representations was the driving factor. Motivated by this finding, we adapt an end-to-end learned detector architecture from general object detection to pedestrian detection. Our deep network outperforms all previous neural networks for pedestrian detection by a large margin, even without using additional training data. After substantial improvements on pedestrian detection in recent years, we investigate the gap between human performance and state-of-the-art pedestrian detectors. We find that pedestrian detectors still have a long way to go before they reach human performance, and we diagnose failure modes of several top performing detectors, giving direction to future research. As a side-effect we publish new, better localised annotations for the Caltech pedestrian benchmark. We analyse detection proposals as a preprocessing step for object detectors. We establish different metrics and compare a wide range of methods according to these metrics. 
By examining the relationship between localisation of proposals and final object detection performance, we define and experimentally verify a metric that can be used as a proxy for detector performance. Furthermore, we address a structural weakness of virtually all object detection pipelines: non-maximum suppression. We analyse why it is necessary and what the shortcomings of the most common approach are. To address these problems, we present work to overcome these shortcomings and to replace typical non-maximum suppression with a learnable alternative. The introduced paradigm paves the way to true end-to-end learning of object detectors without any post-processing. In summary, this thesis provides analyses of recent pedestrian detectors and detection proposals, improves pedestrian detection by employing deep neural networks, and presents a viable alternative to traditional non-maximum suppression.}, }
[47]
K. Hui, “Automatic Methods for Low-Cost Evaluation and Position-Aware Models for Neural Information Retrieval,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
An information retrieval (IR) system assists people in consuming huge amounts of data, making both the evaluation and the construction of such systems important. However, there exist two difficulties: the overwhelmingly large number of query-document pairs to judge, which makes IR evaluation a manually laborious task; and the complicated patterns to model due to the non-symmetric, heterogeneous relationships within a query-document pair, where different interaction patterns such as term dependency and proximity have been demonstrated to be useful, yet are non-trivial for a single IR model to encode. In this thesis we attempt to address both difficulties, from the perspectives of IR evaluation and of the retrieval model respectively, by reducing the manual cost with automatic methods, by investigating the use of crowdsourcing in collecting preference judgments, and by proposing novel neural retrieval models. In particular, to address the large number of query-document pairs in IR evaluation, a low-cost selective labeling method is proposed to pick out a small subset of representative documents for manual judgment, in favor of the follow-up prediction for the remaining query-document pairs; furthermore, a language-model-based cascade measure framework is developed to evaluate novelty and diversity, utilizing the content of the labeled documents to mitigate incomplete labels. In addition, we attempt to make preference judgments practically usable by empirically investigating different properties of the judgments when collected via crowdsourcing, and by proposing a novel judgment mechanism that strikes a compromise between judgment quality and the number of judgments. Finally, to model different complicated patterns in a single retrieval model, and inspired by recent advances in deep learning, we develop novel neural IR models that incorporate different patterns like term dependency, query proximity, density of relevance, and query coverage in a single model.
We demonstrate their superior performance through evaluations on different datasets.
Export
BibTeX
@phdthesis{HUiphd2017, TITLE = {Automatic Methods for Low-Cost Evaluation and Position-Aware Models for Neural Information Retrieval}, AUTHOR = {Hui, Kai}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-269423}, DOI = {10.22028/D291-26942}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {An information retrieval (IR) system assists people in consuming huge amount of data, where the evaluation and the construction of such systems are important. However, there exist two difficulties: the overwhelmingly large number of query-document pairs to judge, making IR evaluation a manually laborious task; and the complicated patterns to model due to the non-symmetric, heterogeneous relationships between a query-document pair, where different interaction patterns such as term dependency and proximity have been demonstrated to be useful, yet are non-trivial for a single IR model to encode. In this thesis we attempt to address both difficulties from the perspectives of IR evaluation and of the retrieval model respectively, by reducing the manual cost with automatic methods, by investigating the usage of crowdsourcing in collecting preference judgments, and by proposing novel neural retrieval models. In particular, to address the large number of query-document pairs in IR evaluation, a low-cost selective labeling method is proposed to pick out a small subset of representative documents for manual judgments in favor of the follow-up prediction for the remaining query-document pairs; furthermore, a language-model based cascade measure framework is developed to evaluate the novelty and diversity, utilizing the content of the labeled documents to mitigate incomplete labels. 
In addition, we also attempt to make the preference judgments practically usable by empirically investigating different properties of the judgments when collected via crowdsourcing; and by proposing a novel judgment mechanism, making a compromise between the judgment quality and the number of judgments. Finally, to model different complicated patterns in a single retrieval model, inspired by the recent advances in deep learning, we develop novel neural IR models to incorporate different patterns like term dependency, query proximity, density of relevance, and query coverage in a single model. We demonstrate their superior performances through evaluations on different datasets.}, }
[48]
J. Kalojanov, “R-symmetry for Triangle Meshes: Detection and Applications,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
In this thesis, we investigate a certain type of local similarity between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have a completely different global structure. This allows r-microtiling to be used for inverse modeling of shape variations, and we develop a method for shape decomposition into rigid, 3D-manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: we consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity, and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.
Export
BibTeX
@phdthesis{Kalojanovphd2017, TITLE = {R-symmetry for Triangle Meshes: Detection and Applications}, AUTHOR = {Kalojanov, Javor}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {In this thesis, we investigate a certain type of local similarities between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have completely different global structure. This allows for using r-microtiling for inverse modeling of shape variations and we develop a method for shape decomposi tion into rigid, 3D manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: We consider the connection between microtiles and noncontext-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.}, }
Endnote
%0 Thesis %A Kalojanov, Javor %Y Slusallek, Philipp %A referee: Wand, Michael %A referee: Mitra, Niloy %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T R-symmetry for Triangle Meshes: Detection and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-96A3-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 94 p. %V phd %9 phd %X In this thesis, we investigate a certain type of local similarities between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have completely different global structure. This allows for using r-microtiling for inverse modeling of shape variations and we develop a method for shape decomposi tion into rigid, 3D manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: We consider the connection between microtiles and noncontext-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. 
Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6787/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[49]
A. Khoreva, “Learning to Segment in Images and Videos with Different Forms of Supervision,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Much progress has been made in image and video segmentation over the last years. To a large extent, the success can be attributed to strong appearance models learned entirely from data, in particular using deep learning methods. However, to perform best, these methods require large, representative datasets for training with expensive pixel-level annotations, which in the case of videos are prohibitive to obtain. Therefore, there is a need to relax this constraint and to consider alternative forms of supervision that are easier and cheaper to collect. In this thesis, we aim to develop algorithms for learning to segment in images and videos with different levels of supervision. First, we develop approaches for training convolutional networks with weaker forms of supervision, such as bounding boxes or image labels, for object boundary estimation and semantic/instance labelling tasks. We propose to generate approximate pixel-level ground truth from these weaker forms of annotation to train a network, which allows us to achieve high-quality results comparable to full supervision without any modifications of the network architecture or the training procedure. Second, we address the problem of the excessive computational and memory costs inherent to solving video segmentation via graphs. We propose approaches to improve the runtime and memory efficiency as well as the output segmentation quality by learning the best representation of the graph from the available training data. In particular, we contribute methods for learning must-link constraints and the topology and edge weights of the graph, as well as for enhancing the graph nodes - superpixels - themselves. Third, we tackle the task of pixel-level object tracking and address the problem of the limited amount of densely annotated video data for training convolutional networks.
We introduce an architecture which allows training with static images only and propose an elaborate data synthesis scheme which creates a large number of training examples close to the target domain from the given first-frame mask. With the proposed techniques we show that densely annotated consecutive video data is not necessary to achieve high-quality, temporally coherent video segmentation results. In summary, this thesis advances the state of the art in weakly supervised image segmentation, graph-based video segmentation, and pixel-level object tracking, and contributes new ways of training convolutional networks with a limited amount of pixel-level annotated training data.
Export
BibTeX
@phdthesis{Khorevaphd2017, TITLE = {Learning to Segment in Images and Videos with Different Forms of Supervision}, AUTHOR = {Khoreva, Anna}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-269954}, DOI = {10.22028/D291-26995}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Much progress has been made in image and video segmentation over the last years. To a large extent, the success can be attributed to the strong appearance models completely learned from data, in particular using deep learning methods. However,to perform best these methods require large representative datasets for training with expensive pixel-level annotations, which in case of videos are prohibitive to obtain. Therefore, there is a need to relax this constraint and to consider alternative forms of supervision, which are easier and cheaper to collect. In this thesis, we aim to develop algorithms for learning to segment in images and videos with different levels of supervision. First, we develop approaches for training convolutional networks with weaker forms of supervision, such as bounding boxes or image labels, for object boundary estimation and semantic/instance labelling tasks. We propose to generate pixel-level approximate groundtruth from these weaker forms of annotations to train a network, which allows to achieve high-quality results comparable to the full supervision quality without any modifications of the network architecture or the training procedure. Second, we address the problem of the excessive computational and memory costs inherent to solving video segmentation via graphs. We propose approaches to improve the runtime and memory efficiency as well as the output segmentation quality by learning from the available training data the best representation of the graph. 
In particular, we contribute with learning must-link constraints, the topology and edge weights of the graph as well as enhancing the graph nodes -- superpixels -- themselves. Third, we tackle the task of pixel-level object tracking and address the problem of the limited amount of densely annotated video data for training convolutional networks. We introduce an architecture which allows training with static images only and propose an elaborate data synthesis scheme which creates a large number of training examples close to the target domain from the given first frame mask. With the proposed techniques we show that densely annotated consequent video data is not necessary to achieve high-quality temporally coherent video segmentationresults. In summary, this thesis advances the state of the art in weakly supervised image segmentation, graph-based video segmentation and pixel-level object tracking and contributes with the new ways of training convolutional networks with a limited amount of pixel-level annotated training data.}, }
Endnote
%0 Thesis %A Khoreva, Anna %Y Schiele, Bernt %A referee: Szeliski, Richard %A referee: Brox, Thomas %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Learning to Segment in Images and Videos with Different Forms of Supervision : %G eng %U http://hdl.handle.net/21.11116/0000-0000-293F-D %R 10.22028/D291-26995 %U urn:nbn:de:bsz:291-scidok-ds-269954 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 247 p. %V phd %9 phd %X Much progress has been made in image and video segmentation over the last years. To a large extent, the success can be attributed to the strong appearance models completely learned from data, in particular using deep learning methods. However,to perform best these methods require large representative datasets for training with expensive pixel-level annotations, which in case of videos are prohibitive to obtain. Therefore, there is a need to relax this constraint and to consider alternative forms of supervision, which are easier and cheaper to collect. In this thesis, we aim to develop algorithms for learning to segment in images and videos with different levels of supervision. First, we develop approaches for training convolutional networks with weaker forms of supervision, such as bounding boxes or image labels, for object boundary estimation and semantic/instance labelling tasks. We propose to generate pixel-level approximate groundtruth from these weaker forms of annotations to train a network, which allows to achieve high-quality results comparable to the full supervision quality without any modifications of the network architecture or the training procedure. 
Second, we address the problem of the excessive computational and memory costs inherent to solving video segmentation via graphs. We propose approaches to improve the runtime and memory efficiency as well as the output segmentation quality by learning from the available training data the best representation of the graph. In particular, we contribute with learning must-link constraints, the topology and edge weights of the graph as well as enhancing the graph nodes - superpixels - themselves. Third, we tackle the task of pixel-level object tracking and address the problem of the limited amount of densely annotated video data for training convolutional networks. We introduce an architecture which allows training with static images only and propose an elaborate data synthesis scheme which creates a large number of training examples close to the target domain from the given first frame mask. With the proposed techniques we show that densely annotated consequent video data is not necessary to achieve high-quality temporally coherent video segmentationresults. In summary, this thesis advances the state of the art in weakly supervised image segmentation, graph-based video segmentation and pixel-level object tracking and contributes with the new ways of training convolutional networks with a limited amount of pixel-level annotated training data. %U http://dx.doi.org/10.22028/D291-26995
[50]
B. Kodric, “Incentives in Dynamic Markets,” Universität des Saarlandes, Saarbrücken, 2017.
Export
BibTeX
@phdthesis{Kodric_PhD2018, TITLE = {Incentives in Dynamic Markets}, AUTHOR = {Kodric, Bojana}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-273509}, DOI = {10.22028/D291-27350}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, }
Endnote
%0 Thesis %A Kodric, Bojana %Y Hoefer, Martin %A referee: Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Incentives in Dynamic Markets : %G eng %U http://hdl.handle.net/21.11116/0000-0002-5C1C-9 %R 10.22028/D291-27350 %U urn:nbn:de:bsz:291-scidok-ds-273509 %F OTHER: hdl:20.500.11880/27173 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P VIII, 96 p. %V phd %9 phd %U http://dx.doi.org/10.22028/D291-27350
[51]
E. Kuzey, “Populating Knowledge bases with Temporal Information,” Universität des Saarlandes, Saarbrücken, 2017.
Export
BibTeX
@phdthesis{KuzeyPhd2017, TITLE = {Populating Knowledge bases with Temporal Information}, AUTHOR = {Kuzey, Erdal}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, }
Endnote
%0 Thesis %A Kuzey, Erdal %Y Weikum, Gerhard %A referee: de Rijke , Maarten %A referee: Suchanek, Fabian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Populating Knowledge bases with Temporal Information : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-EAE5-7 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P XIV, 143 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6811/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[52]
M. Lapin, “Image Classification with Limited Training Data and Class Ambiguity,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications due to scarcity of the data itself or the high costs associated with human annotation. The introduction of additional information and structural constraints can help improve the performance of a learning algorithm. In this thesis, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high-dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity, where a clear distinction between the classes is no longer possible. Many real-world images are naturally multilabel, yet the existing annotation might only contain a single label. In this thesis, we propose and analyze a number of loss functions that allow for a certain tolerance in the top-k predictions of a learner. Our results indicate consistent improvements over the standard loss functions, which put more penalty on the first incorrect prediction than the proposed losses do. All proposed learning methods are complemented with efficient optimization schemes based on stochastic dual coordinate ascent for convex problems and on gradient descent for nonconvex formulations.
Export
BibTeX
@phdthesis{Lapinphd17, TITLE = {Image Classification with Limited Training Data and Class Ambiguity}, AUTHOR = {Lapin, Maksim}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69098}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications due to scarcity of the data itself or high costs associated with human annotation. Introduction of additional information and structural constraints can help improve the performance of a learning algorithm. In this thesis, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity where clear distinction between the classes is no longer possible. Many real world images are naturally multilabel yet the existing annotation might only contain a single label. In this thesis, we propose and analyze a number of loss functions that allow for a certain tolerance in top k predictions of a learner. Our results indicate consistent improvements over the standard loss functions that put more penalty on the first incorrect prediction compared to the proposed losses. All proposed learning methods are complemented with efficient optimization schemes that are based on stochastic dual coordinate ascent for convex problems and on gradient descent for nonconvex formulations.}, }
Endnote
%0 Thesis %A Lapin, Maksim %Y Schiele, Bernt %A referee: Hein, Matthias %A referee: Lampert, Christoph %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Image Classification with Limited Training Data and Class Ambiguity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-9345-9 %U urn:nbn:de:bsz:291-scidok-69098 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 227 p. %V phd %9 phd %X Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications due to scarcity of the data itself or high costs associated with human annotation. Introduction of additional information and structural constraints can help improve the performance of a learning algorithm. In this thesis, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity where clear distinction between the classes is no longer possible. Many real world images are naturally multilabel yet the existing annotation might only contain a single label. In this thesis, we propose and analyze a number of loss functions that allow for a certain tolerance in top k predictions of a learner. Our results indicate consistent improvements over the standard loss functions that put more penalty on the first incorrect prediction compared to the proposed losses. 
All proposed learning methods are complemented with efficient optimization schemes that are based on stochastic dual coordinate ascent for convex problems and on gradient descent for nonconvex formulations. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6909/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[53]
M. Malinowski, “Towards Holistic Machines: From Visual Recognition To Question Answering About Real-world Image,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Computer Vision has undergone major changes over the last five years. Here, we investigate whether the performance of such recognition architectures generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and on the foundations of a Visual Turing Test, where scene understanding is tested by a series of questions about its content. In our studies, we propose DAQUAR, the first ‘question answering about real-world images’ dataset, together with two methods that address the problem: a symbolic-based and a neural-based visual question answering architecture. The symbolic-based method relies on a semantic parser, a database of visual facts, and a Bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, an image encoder, a multimodal embedding, and an answer decoder. This architecture has proven to be effective in capturing language-based biases and has become a standard component of other visual question answering architectures. Along with the methods, we also investigate various evaluation metrics that embrace uncertainty in word meaning and various interpretations of the scene and the question.
Export
BibTeX
@phdthesis{Malinowskiphd17, TITLE = {Towards Holistic Machines: From Visual Recognition To Question Answering About Real-world Image}, AUTHOR = {Malinowski, Mateusz}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-68978}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Computer Vision has undergone major changes over the recent five years. Here, we investigate if the performance of such architectures generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and the foundations of a Visual Turing Test, where the scene understanding is tested by a series of questions about its content. In our studies, we propose DAQUAR, the first {\textquoteleft}question answering about real-world images{\textquoteright} dataset together with methods, termed a symbolic-based and a neural-based visual question answering architectures, that address the problem. The symbolic-based method relies on a semantic parser, a database of visual facts, and a bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, image encoder, multimodal embedding, and answer decoder. This architecture has proven to be effective in capturing language-based biases. It also becomes the standard component of other visual question answering architectures. Along with the methods, we also investigate various evaluation metrics that embraces uncertainty in word's meaning, and various interpretations of the scene and the question.}, }
Endnote
%0 Thesis %A Malinowski, Mateusz %Y Fritz, Mario %A referee: Pinkal, Manfred %A referee: Darrell, Trevor %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Towards Holistic Machines: From Visual Recognition To Question Answering About Real-world Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-9339-5 %U urn:nbn:de:bsz:291-scidok-68978 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 276 p. %V phd %9 phd %X Computer Vision has undergone major changes over the recent five years. Here, we investigate if the performance of such architectures generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and the foundations of a Visual Turing Test, where the scene understanding is tested by a series of questions about its content. In our studies, we propose DAQUAR, the first &#8216;question answering about real-world images&#8217; dataset together with methods, termed a symbolic-based and a neural-based visual question answering architectures, that address the problem. The symbolic-based method relies on a semantic parser, a database of visual facts, and a bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, image encoder, multimodal embedding, and answer decoder. This architecture has proven to be effective in capturing language-based biases. It also becomes the standard component of other visual question answering architectures. 
Along with the methods, we also investigate various evaluation metrics that embraces uncertainty in word's meaning, and various interpretations of the scene and the question. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6897/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[54]
S. Mukherjee, “Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. Prior works in this domain operate on a static snapshot of the community, make strong assumptions about the structure of the data (e.g., relational tables), or consider only shallow features for text classification. To address these limitations, we propose probabilistic graphical models that can leverage the joint interplay between multiple factors in online communities --- like user interactions, community dynamics, and textual content --- to automatically assess the credibility of user-contributed online content, as well as the expertise of users and its evolution, with user-interpretable explanations. To this end, we devise new models based on Conditional Random Fields for different settings, such as incorporating partial expert knowledge for semi-supervised learning, and handling discrete labels as well as numeric ratings for fine-grained analysis. This enables applications such as extracting reliable side effects of drugs from user-contributed posts in health forums, and identifying credible content in news communities. Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture these dynamics, we propose generative models based on Hidden Markov Models, Latent Dirichlet Allocation, and Brownian Motion to trace the continuous evolution of user expertise and their language model over time. This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. This also enables applications such as identifying helpful product reviews, and detecting fake and anomalous reviews with limited information.
Export
BibTeX
@phdthesis{Mukherjeephd17, TITLE = {Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities}, AUTHOR = {Mukherjee, Subhabrata}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69269}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. Prior works in this domain operate on a static snapshot of the community, making strong assumptions about the structure of the data (e.g., relational tables), or consider only shallow features for text classification. To address the above limitations, we propose probabilistic graphical models that can leverage the joint interplay between multiple factors in online communities --- like user interactions, community dynamics, and textual content --- to automatically assess the credibility of user-contributed online content, and the expertise of users and their evolution with user-interpretable explanation. To this end, we devise new models based on Conditional Random Fields for different settings like incorporating partial expert knowledge for semi-supervised learning, and handling discrete labels as well as numeric ratings for fine-grained analysis. This enables applications such as extracting reliable side-effects of drugs from user-contributed posts in healthforums, and identifying credible content in news communities. Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture this dynamics, we propose generative models based on Hidden Markov Model, Latent Dirichlet Allocation, and Brownian Motion to trace the continuous evolution of user expertise and their language model over time. 
This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. This also enables applications such as identifying helpful product reviews, and detecting fake and anomalous reviews with limited information.}, }
[55]
F. Müller, “Analyzing DNA Methylation Signatures of Cell Identity,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Although virtually all cells in an organism share the same genome, regulatory mechanisms give rise to hundreds of different, highly specialized cell types. Understanding these mechanisms has been in the limelight of epigenomic research. It is now evident that cellular identity is inscribed in the epigenome of each individual cell. Nonetheless, the precise mechanisms by which different epigenomic marks are involved in regulating gene expression are just beginning to be unraveled. Furthermore, epigenomic patterns are highly dynamic and subject to environmental influences. Any given cell type is defined by cell populations exhibiting epigenetic heterogeneity at different levels. Characterizing this heterogeneity is paramount in understanding the regulatory role of the epigenome. Different epigenomic marks can be profiled using high-throughput sequencing, and global initiatives have started to provide a comprehensive picture of the human epigenome by assaying a multitude of marks across a broad panel of cell types and conditions. In particular, DNA methylation has been extensively studied for its gene-regulatory role in health and disease. This thesis describes computational methods and pipelines for the analysis of DNA methylation data. It provides concepts for addressing bioinformatic challenges such as the processing of large, epigenome-wide datasets and integrating multiple levels of information in an interpretable manner. We developed RnBeads, an R package that facilitates comprehensive, interpretable analysis of large-scale DNA methylation datasets at the level of single CpGs or genomic regions of interest. With the epiRepeatR pipeline, we introduced additional tools for studying global patterns of epigenomic marks in transposons and other repetitive regions of the genome. Blood-cell differentiation represents a useful model for studying trajectories of cellular differentiation. 
We developed and applied bioinformatic methods to dissect the DNA methylation landscape of the hematopoietic system. Here, we provide a broad outline of cell-type-specific DNA methylation signatures and phenotypic diversity reflected in the epigenomes of human mature blood cells. We also describe the DNA methylation dynamics in the process of immune memory formation in T helper cells. Moreover, we portrayed epigenetic fingerprints of defined progenitor cell types and derived computational models that were capable of accurately inferring cell identity. We used these models in order to characterize heterogeneity in progenitor cell populations, to identify DNA methylation signatures of hematopoietic differentiation and to infer the epigenomic similarities of blood cell types. Finally, by interpreting DNA methylation patterns in leukemia and derived pluripotent cells, we started to discern how epigenomic patterns are altered in disease and explored how reprogramming of these patterns could potentially be used to restore a non-malignant state. In summary, this work showcases novel methods and computational tools for the identification and interpretation of epigenetic signatures of cell identity. It provides a detailed view on the epigenomic landscape spanned by DNA methylation patterns in hematopoietic cells that enhances our understanding of epigenetic regulation in cell differentiation and disease.
Export
BibTeX
@phdthesis{muellerphd17, TITLE = {Analyzing {DNA} Methylation Signatures of Cell Identity}, AUTHOR = {M{\"u}ller, Fabian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69432}, DOI = {10.17617/2.2474737}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Although virtually all cells in an organism share the same genome, regulatory mechanisms give rise to hundreds of different, highly specialized cell types. Understanding these mechanisms has been in the limelight of epigenomic research. It is now evident that cellular identity is inscribed in the epigenome of each individual cell. Nonetheless, the precise mechanisms by which different epigenomic marks are involved in regulating gene expression are just beginning to be unraveled. Furthermore, epigenomic patterns are highly dynamic and subject to environmental influences. Any given cell type is defined by cell populations exhibiting epigenetic heterogeneity at different levels. Characterizing this heterogeneity is paramount in understanding the regulatory role of the epigenome. Different epigenomic marks can be profiled using high-throughput sequencing, and global initiatives have started to provide a comprehensive picture of the human epigenome by assaying a multitude of marks across a broad panel of cell types and conditions. In particular, DNA methylation has been extensively studied for its gene-regulatory role in health and disease. This thesis describes computational methods and pipelines for the analysis of DNA methylation data. It provides concepts for addressing bioinformatic challenges such as the processing of large, epigenome-wide datasets and integrating multiple levels of information in an interpretable manner. We developed RnBeads, an R package that facilitates comprehensive, interpretable analysis of large-scale DNA methylation datasets at the level of single CpGs or genomic regions of interest. 
With the epiRepeatR pipeline, we introduced additional tools for studying global patterns of epigenomic marks in transposons and other repetitive regions of the genome. Blood-cell differentiation represents a useful model for studying trajectories of cellular differentiation. We developed and applied bioinformatic methods to dissect the DNA methylation landscape of the hematopoietic system. Here, we provide a broad outline of cell-type-specific DNA methylation signatures and phenotypic diversity reflected in the epigenomes of human mature blood cells. We also describe the DNA methylation dynamics in the process of immune memory formation in T helper cells. Moreover, we portrayed epigenetic fingerprints of defined progenitor cell types and derived computational models that were capable of accurately inferring cell identity. We used these models in order to characterize heterogeneity in progenitor cell populations, to identify DNA methylation signatures of hematopoietic differentiation and to infer the epigenomic similarities of blood cell types. Finally, by interpreting DNA methylation patterns in leukemia and derived pluripotent cells, we started to discern how epigenomic patterns are altered in disease and explored how reprogramming of these patterns could potentially be used to restore a non-malignant state. In summary, this work showcases novel methods and computational tools for the identification and interpretation of epigenetic signatures of cell identity. It provides a detailed view on the epigenomic landscape spanned by DNA methylation patterns in hematopoietic cells that enhances our understanding of epigenetic regulation in cell differentiation and disease.}, }
[56]
O. Nalbach, “Smarter Screen Space Shading,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
This dissertation introduces a range of new methods to produce images of virtual scenes in a matter of milliseconds. Imposing as few constraints as possible on the set of scenes that can be handled, e.g., regarding geometric changes over time or lighting conditions, precludes pre-computations and makes this a particularly difficult problem. We first present a general approach, called deep screen space, with which a variety of light-transport effects can be simulated within the aforementioned setting. This approach is then further extended to additionally handle scenes containing participating media like clouds. We also show how to improve the correctness of deep screen space and related algorithms by accounting for the mutual visibility of points in a scene. After that, we take a completely different point of view on image generation, using a learning-based approach to approximate a rendering function. We show that neural networks can hallucinate shading effects which otherwise have to be computed using costly analytic computations. Finally, we contribute a holistic framework for dealing with phosphorescent materials in computer graphics, covering all aspects from the acquisition of real materials, to easy editing, to image synthesis.
Export
BibTeX
@phdthesis{Nalbachphd2017, TITLE = {Smarter Screen Space Shading}, AUTHOR = {Nalbach, Oliver}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-269289}, DOI = {10.22028/D291-26928}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {This dissertation introduces a range of new methods to produce images of virtual scenes in a matter of milliseconds. Imposing as few constraints as possible on the set of scenes that can be handled, e.g., regarding geometric changes over time or lighting conditions, precludes pre-computations and makes this a particularly difficult problem. We first present a general approach, called deep screen space, using which a variety of light transport aspects can be simulated within the aforementioned setting. This approach is then further extended to additionally handle scenes containing participating media like clouds. We also show how to improve the correctness of deep screen space and related algorithms by accounting for mutual visibility of points in a scene. After that, we take a completely different point of view on image generation using a learning-based approach to approximate a rendering function. We show that neural networks can hallucinate shading effects which otherwise have to be computed using costly analytic computations. Finally, we contribute a holistic framework to deal with phosphorescent materials in computer graphics, covering all aspects from acquisition of real materials, to easy editing, to image synthesis.}, }
[57]
D. B. Nguyen, “Joint Models for Information and Knowledge Extraction,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Information and knowledge extraction from natural language text is a key asset for question answering, semantic search, automatic summarization, and other machine reading applications. There are many sub-tasks involved such as named entity recognition, named entity disambiguation, co-reference resolution, relation extraction, event detection, discourse parsing, and others. Solving these tasks is challenging as natural language text is unstructured, noisy, and ambiguous. Key challenges, which focus on identifying and linking named entities, as well as discovering relations between them, include: • High NERD Quality. Named entity recognition and disambiguation, NERD for short, are performed first in the extraction pipeline. Their results may affect other downstream tasks. • Coverage vs. Quality of Relation Extraction. Model-based information extraction methods achieve high extraction quality at low coverage, whereas open information extraction methods capture relational phrases between entities. However, the latter suffer in quality from non-canonicalized and noisy output. These limitations need to be overcome. • On-the-fly Knowledge Acquisition. Real-world applications such as question answering, monitoring content streams, etc. demand on-the-fly knowledge acquisition. Building such an end-to-end system is challenging because it requires high throughput, high extraction quality, and high coverage. This dissertation addresses the above challenges, developing new methods to advance the state of the art. The first contribution is a robust model for joint inference between entity recognition and disambiguation. The second contribution is a novel model for relation extraction and entity disambiguation on Wikipedia-style text. The third contribution is an end-to-end system for constructing query-driven, on-the-fly knowledge bases.
Export
BibTeX
@phdthesis{Nguyenphd2017, TITLE = {Joint Models for Information and Knowledge Extraction}, AUTHOR = {Nguyen, Dat Ba}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-269433}, DOI = {10.22028/D291-26943}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Information and knowledge extraction from natural language text is a key asset for question answering, semantic search, automatic summarization, and other machine reading applications. There are many sub-tasks involved such as named entity recognition, named entity disambiguation, co-reference resolution, relation extraction, event detection, discourse parsing, and others. Solving these tasks is challenging as natural language text is unstructured, noisy, and ambiguous. Key challenges, which focus on identifying and linking named entities, as well as discovering relations between them, include: \mbox{$\bullet$} High NERD Quality. Named entity recognition and disambiguation, NERD for short, are preformed first in the extraction pipeline. Their results may affect other downstream tasks. \mbox{$\bullet$} Coverage vs. Quality of Relation Extraction. Model-based information extraction methods achieve high extraction quality at low coverage, whereas open information extraction methods capture relational phrases between entities. However, the latter degrades in quality by non-canonicalized and noisy output. These limitations need to be overcome. \mbox{$\bullet$} On-the-fly Knowledge Acquisition. Real-world applications such as question answering, monitoring content streams, etc. demand on-the-fly knowledge acquisition. Building such an end-to-end system is challenging because it requires high throughput, high extraction quality, and high coverage. This dissertation addresses the above challenges, developing new methods to advance the state of the art. 
The first contribution is a robust model for joint inference between entity recognition and disambiguation. The second contribution is a novel model for relation extraction and entity disambiguation on Wikipediastyle text. The third contribution is an end-to-end system for constructing querydriven, on-the-fly knowledge bases.}, }
[58]
A. Rohrbach, “Generation and Grounding of Natural Language Descriptions for Visual Data,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for the linguistic concepts, necessary for many applications. This thesis focuses on both directions and tackles three specific problems. First, we develop recognition approaches to understand videos of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at a variable level of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach which learns from videos and sentences to describe movie clips, relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people. To summarize, this thesis advances the state of the art in automatic video description and visual grounding and also contributes large datasets for studying the intersection of computer vision and computational linguistics.
Export
BibTeX
@phdthesis{Rohrbachphd17, TITLE = {Generation and Grounding of Natural Language Descriptions for Visual Data}, AUTHOR = {Rohrbach, Anna}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-68749}, DOI = {10.22028/D291-26708}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for the linguistic concepts, necessary for many applications. This thesis focuses on both directions and tackles three specific problems. First, we develop recognition approaches to understand video of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at variable level of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach, which learns from videos and sentences to describe movie clips relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people. To summarize, this thesis advances the state-of-the-art in automatic video description and visual grounding and also contributes large datasets for studying the intersection of computer vision and computational linguistics.}, }
[59]
A. Siu, “Knowledge-driven Entity Recognition and Disambiguation in Biomedical Text,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Entity recognition and disambiguation (ERD) for the biomedical domain are notoriously difficult problems due to the variety of entities and their often long names in many variations. Existing works focus heavily on the molecular level in two ways. First, they target scientific literature as the input text genre. Second, they target single, highly specialized entity types such as chemicals, genes, and proteins. However, a wealth of biomedical information is also buried in the vast universe of Web content. In order to fully utilize all the information available, there is a need to tap into Web content as an additional input. Moreover, there is a need to cater for other entity types such as symptoms and risk factors since Web content focuses on consumer health. The goal of this thesis is to investigate ERD methods that are applicable to all entity types in scientific literature as well as Web content. In addition, we focus on under-explored aspects of the biomedical ERD problems -- scalability, long noun phrases, and out-of-knowledge base (OOKB) entities. This thesis makes four main contributions, all of which leverage knowledge in UMLS (Unified Medical Language System), the largest and most authoritative knowledge base (KB) of the biomedical domain. The first contribution is a fast dictionary lookup method for entity recognition that maximizes throughput while balancing the loss of precision and recall. The second contribution is a semantic type classification method targeting common words in long noun phrases. We develop a custom set of semantic types to capture word usages; besides biomedical usage, these types also cope with non-biomedical usage and the case of generic, non-informative usage. The third contribution is a fast heuristics method for entity disambiguation in MEDLINE abstracts, again maximizing throughput but this time maintaining accuracy. The fourth contribution is a corpus-driven entity disambiguation method that addresses OOKB entities. 
The method first captures the entities expressed in a corpus as latent representations that comprise in-KB and OOKB entities alike before performing entity disambiguation.
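The fast dictionary lookup idea above can be illustrated with a toy greedy longest-match scan. Everything below (the function name, the two-entry phrase dictionary, the CUI codes) is invented for illustration and is far simpler than the thesis' throughput-optimized UMLS-based method:

```python
def recognize(tokens, dictionary, max_len=5):
    """Scan tokens left to right, matching the longest dictionary
    phrase that starts at each position."""
    mentions = []
    i = 0
    while i < len(tokens):
        match = None
        # Try the longest window first so longer names win over substrings.
        for j in range(min(i + max_len, len(tokens)), i, -1):
            phrase = " ".join(tokens[i:j]).lower()
            if phrase in dictionary:
                match = (i, j, dictionary[phrase])
                break
        if match:
            mentions.append(match)
            i = match[1]  # skip past the matched span
        else:
            i += 1
    return mentions

# Toy dictionary; real systems would load millions of UMLS phrases.
umls_toy = {"type 2 diabetes": "C0011860", "diabetes": "C0011849"}
text = "patient with type 2 diabetes and hypertension".split()
print(recognize(text, umls_toy))  # → [(2, 5, 'C0011860')]
```

The longest-match preference is what keeps "type 2 diabetes" from being fragmented into the more general "diabetes" entry.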
Export
BibTeX
@phdthesis{siuphd17, TITLE = {Knowledge-driven Entity Recognition and Disambiguation in Biomedical Text}, AUTHOR = {Siu, Amy}, LANGUAGE = {eng}, DOI = {10.22028/D291-26790}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Entity recognition and disambiguation (ERD) for the biomedical domain are notoriously difficult problems due to the variety of entities and their often long names in many variations. Existing works focus heavily on the molecular level in two ways. First, they target scientific literature as the input text genre. Second, they target single, highly specialized entity types such as chemicals, genes, and proteins. However, a wealth of biomedical information is also buried in the vast universe of Web content. In order to fully utilize all the information available, there is a need to tap into Web content as an additional input. Moreover, there is a need to cater for other entity types such as symptoms and risk factors since Web content focuses on consumer health. The goal of this thesis is to investigate ERD methods that are applicable to all entity types in scientific literature as well as Web content. In addition, we focus on under-explored aspects of the biomedical ERD problems -- scalability, long noun phrases, and out-of-knowledge base (OOKB) entities. This thesis makes four main contributions, all of which leverage knowledge in UMLS (Unified Medical Language System), the largest and most authoritative knowledge base (KB) of the biomedical domain. The first contribution is a fast dictionary lookup method for entity recognition that maximizes throughput while balancing the loss of precision and recall. The second contribution is a semantic type classification method targeting common words in long noun phrases. 
We develop a custom set of semantic types to capture word usages; besides biomedical usage, these types also cope with non-biomedical usage and the case of generic, non-informative usage. The third contribution is a fast heuristics method for entity disambiguation in MEDLINE abstracts, again maximizing throughput but this time maintaining accuracy. The fourth contribution is a corpus-driven entity disambiguation method that addresses OOKB entities. The method first captures the entities expressed in a corpus as latent representations that comprise in-KB and OOKB entities alike before performing entity disambiguation.}, }
Endnote
%0 Thesis %A Siu, Amy %Y Weikum, Gerhard %A referee: Berberich, Klaus %A referee: Leser, Ulf %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Knowledge-driven Entity Recognition and Disambiguation in Biomedical Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-DD18-E %R 10.22028/D291-26790 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 169 p. %V phd %9 phd %X Entity recognition and disambiguation (ERD) for the biomedical domain are notoriously difficult problems due to the variety of entities and their often long names in many variations. Existing works focus heavily on the molecular level in two ways. First, they target scientific literature as the input text genre. Second, they target single, highly specialized entity types such as chemicals, genes, and proteins. However, a wealth of biomedical information is also buried in the vast universe of Web content. In order to fully utilize all the information available, there is a need to tap into Web content as an additional input. Moreover, there is a need to cater for other entity types such as symptoms and risk factors since Web content focuses on consumer health. The goal of this thesis is to investigate ERD methods that are applicable to all entity types in scientific literature as well as Web content. In addition, we focus on under-explored aspects of the biomedical ERD problems -- scalability, long noun phrases, and out-of-knowledge base (OOKB) entities. This thesis makes four main contributions, all of which leverage knowledge in UMLS (Unified Medical Language System), the largest and most authoritative knowledge base (KB) of the biomedical domain. 
The first contribution is a fast dictionary lookup method for entity recognition that maximizes throughput while balancing the loss of precision and recall. The second contribution is a semantic type classification method targeting common words in long noun phrases. We develop a custom set of semantic types to capture word usages; besides biomedical usage, these types also cope with non-biomedical usage and the case of generic, non-informative usage. The third contribution is a fast heuristics method for entity disambiguation in MEDLINE abstracts, again maximizing throughput but this time maintaining accuracy. The fourth contribution is a corpus-driven entity disambiguation method that addresses OOKB entities. The method first captures the entities expressed in a corpus as latent representations that comprise in-KB and OOKB entities alike before performing entity disambiguation. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/26803
[60]
P. Sun, “Bi-(N-) cluster editing and its biomedical applications,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
The extremely fast advances in wet-lab techniques lead to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in today's systems biology. The traditional clustering approach, although widely used to divide the data into groups sharing common features, is less powerful in the analysis of heterogeneous data from n different sources (n ≥ 2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large-scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing, modeling the input as n-partite graphs and solving the clustering problem with various strategies. In the first part of the thesis, the complexity and the fixed-parameter tractability of the extended bicluster editing model with relaxed constraints, namely the ?-bicluster editing model, are investigated and its NP-hardness is proven. Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with evaluations of their performance and systematic comparisons against other algorithms of the same type for solving the bi-/n-cluster editing problem. To demonstrate the practical impact, three real-world analyses using n-CluE are performed, including (a) prediction of novel genotype-phenotype associations by clustering data from Genome-Wide Association Studies; (b) comparison between n-CluE and eight other biclustering tools on GEO Omnibus microarray data sets; and (c) drug repositioning predictions by co-clustering drug, gene, and disease networks.
The outstanding performance of n-CluE in these real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biologically relevant information in bioinformatic analyses.
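The bicluster editing objective described above can be sketched in a few lines: the cost of a candidate co-clustering of a bipartite graph is the number of edge deletions (edges crossing clusters) plus edge insertions (missing edges inside a cluster). The names and the tiny gene/disease graph below are illustrative and not taken from n-CluE:

```python
from itertools import product

def editing_cost(edges, left_cluster, right_cluster):
    """edges: iterable of (u, v) pairs between the two vertex sets.
    left_cluster / right_cluster: dicts mapping vertex -> cluster id."""
    edge_set = set(edges)
    cost = 0
    # Deletions: edges whose endpoints lie in different clusters.
    for u, v in edge_set:
        if left_cluster[u] != right_cluster[v]:
            cost += 1
    # Insertions: same-cluster pairs that are not edges yet.
    for u, v in product(left_cluster, right_cluster):
        if left_cluster[u] == right_cluster[v] and (u, v) not in edge_set:
            cost += 1
    return cost

edges = [("g1", "d1"), ("g2", "d1"), ("g2", "d2")]
left = {"g1": 0, "g2": 0}
right = {"d1": 0, "d2": 1}
print(editing_cost(edges, left, right))  # → 1 (delete the edge (g2, d2))
```

Bicluster editing asks for the partition minimizing this cost; the NP-hardness result mentioned above is exactly why n-CluE combines one exact with two heuristic algorithms.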
Export
BibTeX
@phdthesis{Sunphd17, TITLE = {Bi-(N-) cluster editing and its biomedical applications}, AUTHOR = {Sun, Peng}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69309}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {The extremely fast advances in wet-lab techniques lead to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in today's systems biology. The traditional clustering approach, although widely used to divide the data into groups sharing common features, is less powerful in the analysis of heterogeneous data from n different sources (n ≥ 2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large-scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing, modeling the input as n-partite graphs and solving the clustering problem with various strategies. In the first part of the thesis, the complexity and the fixed-parameter tractability of the extended bicluster editing model with relaxed constraints, namely the ?-bicluster editing model, are investigated and its NP-hardness is proven. Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with evaluations of their performance and systematic comparisons against other algorithms of the same type for solving the bi-/n-cluster editing problem. To demonstrate the practical impact, three real-world analyses using n-CluE are performed, including (a) prediction of novel genotype-phenotype associations by clustering data from Genome-Wide Association Studies; (b) comparison between n-CluE and eight other biclustering tools on GEO Omnibus microarray data sets; and (c) drug repositioning predictions by co-clustering drug, gene, and disease networks. The outstanding performance of n-CluE in these real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biologically relevant information in bioinformatic analyses.}, }
Endnote
%0 Thesis %A Sun, Peng %Y Baumbach, Jan %A referee: Guo, Jiong %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Bi-(N-) cluster editing and its biomedical applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-A65E-F %U urn:nbn:de:bsz:291-scidok-69309 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 192 p. %V phd %9 phd %X The extremely fast advances in wet-lab techniques lead to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in today's systems biology. The traditional clustering approach, although widely used to divide the data into groups sharing common features, is less powerful in the analysis of heterogeneous data from n different sources (n ≥ 2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large-scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing, modeling the input as n-partite graphs and solving the clustering problem with various strategies. In the first part of the thesis, the complexity and the fixed-parameter tractability of the extended bicluster editing model with relaxed constraints, namely the ?-bicluster editing model, are investigated and its NP-hardness is proven. Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with evaluations of their performance and systematic comparisons against other algorithms of the same type for solving the bi-/n-cluster editing problem. To demonstrate the practical impact, three real-world analyses using n-CluE are performed, including (a) prediction of novel genotype-phenotype associations by clustering data from Genome-Wide Association Studies; (b) comparison between n-CluE and eight other biclustering tools on GEO Omnibus microarray data sets; and (c) drug repositioning predictions by co-clustering drug, gene, and disease networks. The outstanding performance of n-CluE in these real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biologically relevant information in bioinformatic analyses. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6930/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[61]
C. H. Tang, “Logics for Rule-based Configuration Systems,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Rule-based configuration systems, such as DOPLER at Siemens, are successfully used in industry. These systems make complex domain knowledge available to users and let them derive valid, customized products out of large sets of components. However, maintenance of such systems remains a challenge. Formal models are a prerequisite for the use of automated methods of analysis. This thesis deals with the formalization of rule-based configuration. We develop two logics whose transition semantics are suited for expressing the way systems like DOPLER operate. This is due to the existence of two types of transitions, namely user and rule transitions, and a fixpoint mechanism that determines their dynamic relationship. The first logic, PIDL, models propositional systems, while the second logic, PIDL+, additionally considers arithmetic constraints. They allow the formulation and automated verification of relevant properties of rule-based configuration systems.
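The user-transition / rule-transition interplay described above can be sketched operationally: after a user choice, condition-action rules fire repeatedly until no rule adds anything new, i.e. until a fixpoint is reached. The rules and component names below are invented examples, not DOPLER content or PIDL syntax:

```python
def rule_fixpoint(facts, rules):
    """rules: list of (premises, conclusion) with premises a set of atoms.
    Returns the least fixpoint of applying all rules to facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"gpu"}, "cooler"), ({"cooler", "gpu"}, "big_psu")]
# User transition: the user selects the "gpu" component ...
state = rule_fixpoint({"gpu"}, rules)  # ... then rules fire to fixpoint.
print(sorted(state))  # → ['big_psu', 'cooler', 'gpu']
```

Verifying properties of such systems amounts to reasoning about all reachable fixpoints, which is what the transition semantics of PIDL and PIDL+ make precise.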
Export
BibTeX
@phdthesis{Tangphd2017, TITLE = {Logics for Rule-based Configuration Systems}, AUTHOR = {Tang, Ching Hoo}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69639}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Rule-based configuration systems, such as DOPLER at Siemens, are successfully used in industry. These systems make complex domain knowledge available to users and let them derive valid, customized products out of large sets of components. However, maintenance of such systems remains a challenge. Formal models are a prerequisite for the use of automated methods of analysis. This thesis deals with the formalization of rule-based configuration. We develop two logics whose transition semantics are suited for expressing the way systems like DOPLER operate. This is due to the existence of two types of transitions, namely user and rule transitions, and a fixpoint mechanism that determines their dynamic relationship. The first logic, PIDL, models propositional systems, while the second logic, PIDL+, additionally considers arithmetic constraints. They allow the formulation and automated verification of relevant properties of rule-based configuration systems.}, }
Endnote
%0 Thesis %A Tang, Ching Hoo %Y Weidenbach, Christoph %A referee: Herzig, Andreas %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Logics for Rule-based Configuration Systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-0871-7 %U urn:nbn:de:bsz:291-scidok-69639 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P X, 123 p. %V phd %9 phd %X Rule-based configuration systems, such as DOPLER at Siemens, are successfully used in industry. These systems make complex domain knowledge available to users and let them derive valid, customized products out of large sets of components. However, maintenance of such systems remains a challenge. Formal models are a prerequisite for the use of automated methods of analysis. This thesis deals with the formalization of rule-based configuration. We develop two logics whose transition semantics are suited for expressing the way systems like DOPLER operate. This is due to the existence of two types of transitions, namely user and rule transitions, and a fixpoint mechanism that determines their dynamic relationship. The first logic, PIDL, models propositional systems, while the second logic, PIDL+, additionally considers arithmetic constraints. They allow the formulation and automated verification of relevant properties of rule-based configuration systems. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6963/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[62]
S. Tang, “People detection and tracking in crowded scenes,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
People are often a central element of visual scenes, particularly in real-world street scenes. Thus it has been a long-standing goal in Computer Vision to develop methods for analyzing humans in visual data. Due to the complexity of real-world scenes, visual understanding of people remains challenging for machine perception. In this thesis we focus on advancing techniques for people detection and tracking in crowded street scenes. We also propose new models for human pose estimation and motion segmentation in realistic images and videos. First, we propose detection models that are jointly trained to detect a single person as well as pairs of people under varying degrees of occlusion. The learning algorithm of our joint detector facilitates a tight integration of tracking and detection, because it is designed to address common failure cases during tracking due to long-term inter-object occlusions. Second, we propose novel multi-person tracking models that formulate tracking as a graph partitioning problem. Our models jointly cluster detection hypotheses in space and time, eliminating the need for heuristic non-maximum suppression. Furthermore, for crowded scenes, our tracking model encodes long-range person re-identification information into the detection clustering process in a unified and rigorous manner. Third, we explore the visual tracking task at different granularities. We present a tracking model that simultaneously clusters object bounding boxes and pixel-level trajectories over time. This approach provides a rich understanding of the motion of objects in the scene. Last, we extend our tracking model to the multi-person pose estimation task. We introduce a joint subset partitioning and labelling model in which we simultaneously estimate the poses of all the people in the scene. In summary, this thesis addresses a number of diverse tasks that aim to enable vision systems to analyze people in realistic images and videos.
In particular, the thesis proposes several novel ideas and rigorous mathematical formulations, pushes the boundary of the state of the art, and achieves superior performance.
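The tracking-as-graph-partitioning idea can be sketched on a toy scale: detections are nodes, pairwise affinities are edge weights, and a cluster corresponds to one person's track. The greedy union-find agglomeration below is a deliberately simple stand-in for the thesis' rigorous minimum-cost multicut formulation; all names and numbers are invented:

```python
def cluster_detections(n, affinities):
    """n: number of detections. affinities: dict (i, j) -> score,
    positive = likely the same person. Returns a cluster id per detection."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Merge the strongest positive-affinity pairs first.
    for (i, j), w in sorted(affinities.items(), key=lambda kv: -kv[1]):
        if w > 0:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Three detections: 0 and 1 overlap across consecutive frames, 2 is far away.
aff = {(0, 1): 0.9, (1, 2): -0.7, (0, 2): -0.8}
labels = cluster_detections(3, aff)
print(labels[0] == labels[1], labels[0] == labels[2])  # → True False
```

Because clustering is done jointly over space and time, overlapping hypotheses of the same person merge into one track without a separate non-maximum suppression step.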
Export
BibTeX
@phdthesis{tangphd2017, TITLE = {People detection and tracking in crowded scenes}, AUTHOR = {Tang, Siyu}, LANGUAGE = {eng}, DOI = {10.22028/D291-26793}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {People are often a central element of visual scenes, particularly in real-world street scenes. Thus it has been a long-standing goal in Computer Vision to develop methods aiming at analyzing humans in visual data. Due to the complexity of real-world scenes, visual understanding of people remains challenging for machine perception. In this thesis we focus on advancing the techniques for people detection and tracking in crowded street scenes. We also propose new models for human pose estimation and motion segmentation in realistic images and videos. First, we propose detection models that are jointly trained to detect single person as well as pairs of people under varying degrees of occlusion. The learning algorithm of our joint detector facilitates a tight integration of tracking and detection, because it is designed to address common failure cases during tracking due to long-term inter-object occlusions. Second, we propose novel multi person tracking models that formulate tracking as a graph partitioning problem. Our models jointly cluster detection hypotheses in space and time, eliminating the need for a heuristic non-maximum suppression. Furthermore, for crowded scenes, our tracking model encodes long-range person re-identification information into the detection clustering process in a unified and rigorous manner. Third, we explore the visual tracking task in different granularity. We present a tracking model that simultaneously clusters object bounding boxes and pixel level trajectories over time. This approach provides a rich understanding of the motion of objects in the scene. Last, we extend our tracking model for the multi person pose estimation task. 
We introduce a joint subset partitioning and labelling model where we simultaneously estimate the poses of all the people in the scene. In summary, this thesis addresses a number of diverse tasks that aim to enable vision systems to analyze people in realistic images and videos. In particular, the thesis proposes several novel ideas and rigorous mathematical formulations, pushes the boundary of state-of-the-arts and results in superior performance.}, }
Endnote
%0 Thesis %A Tang, Siyu %Y Schiele, Bernt %A referee: Black, Michael %A referee: Gool, Luc van %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T People detection and tracking in crowded scenes : %G eng %U http://hdl.handle.net/21.11116/0000-0001-8E59-C %R 10.22028/D291-26793 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 171 p. %V phd %9 phd %X People are often a central element of visual scenes, particularly in real-world street scenes. Thus it has been a long-standing goal in Computer Vision to develop methods aiming at analyzing humans in visual data. Due to the complexity of real-world scenes, visual understanding of people remains challenging for machine perception. In this thesis we focus on advancing the techniques for people detection and tracking in crowded street scenes. We also propose new models for human pose estimation and motion segmentation in realistic images and videos. First, we propose detection models that are jointly trained to detect single person as well as pairs of people under varying degrees of occlusion. The learning algorithm of our joint detector facilitates a tight integration of tracking and detection, because it is designed to address common failure cases during tracking due to long-term inter-object occlusions. Second, we propose novel multi person tracking models that formulate tracking as a graph partitioning problem. Our models jointly cluster detection hypotheses in space and time, eliminating the need for a heuristic non-maximum suppression. Furthermore, for crowded scenes, our tracking model encodes long-range person re-identification information into the detection clustering process in a unified and rigorous manner. 
Third, we explore the visual tracking task in different granularity. We present a tracking model that simultaneously clusters object bounding boxes and pixel level trajectories over time. This approach provides a rich understanding of the motion of objects in the scene. Last, we extend our tracking model for the multi person pose estimation task. We introduce a joint subset partitioning and labelling model where we simultaneously estimate the poses of all the people in the scene. In summary, this thesis addresses a number of diverse tasks that aim to enable vision systems to analyze people in realistic images and videos. In particular, the thesis proposes several novel ideas and rigorous mathematical formulations, pushes the boundary of state-of-the-arts and results in superior performance. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/26806
[63]
D. Wand, “Superposition: Types and Induction,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Proof assistants are becoming widespread for the formalization of theories in both computer science and mathematics. They provide rich logics with powerful type systems and machine-checked proofs, which increase the confidence in the correctness of complicated and detailed proofs. However, they incur a significant overhead compared to pen-and-paper proofs. This thesis describes work on bridging the gap between higher-order proof assistants and first-order automated theorem provers by extending the capabilities of the automated theorem provers to provide features usually found in proof assistants. My first contribution is the development and implementation of a first-order superposition calculus with a polymorphic type system that supports type classes, together with the accompanying refutational completeness proof for that calculus. The inclusion of the type system into the superposition calculus and solvers completely removes the type-encoding overhead when encoding problems from many proof assistants. My second contribution is the development of SupInd, an extension of the typed superposition calculus that supports data types and structural induction over those data types. It includes heuristics that guide the induction and conjecture-strengthening techniques, which can be applied independently of the underlying calculus. I have implemented the contributions in a tool called Pirate. The evaluations of both contributions show promising results.
Export
BibTeX
@phdthesis{wandphd2017, TITLE = {Superposition: Types and Induction}, AUTHOR = {Wand, Daniel}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69522}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Proof assistants are becoming widespread for formalization of theories both in computer science and mathematics. They provide rich logics with powerful type systems and machine-checked proofs which increase the confidence in the correctness in complicated and detailed proofs. However, they incur a significant overhead compared to pen-and-paper proofs. This thesis describes work on bridging the gap between high-order proof assistants and first-order automated theorem provers by extending the capabilities of the automated theorem provers to provide features usually found in proof assistants. My first contribution is the development and implementation of a first-order superposition calculus with a polymorphic type system that supports type classes and the accompanying refutational completeness proof for that calculus. The inclusion of the type system into the superposition calculus and solvers completely removes the type encoding overhead when encoding problems from many proof assistants. My second contribution is the development of SupInd, an extension of the typed superposition calculus that supports data types and structural induction over those data types. It includes heuristics that guide the induction and conjecture strengthening techniques, which can be applied independently of the underlying calculus. I have implemented the contributions in a tool called Pirate. The evaluations of both contributions show promising results.}, }
Endnote
%0 Thesis %A Wand, Daniel %Y Weidenbach, Christoph %A referee: Blanchette, Jasmin Christian %A referee: Sutcliffe, Geoff %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society %T Superposition: Types and Induction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-E99C-5 %U urn:nbn:de:bsz:291-scidok-69522 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P x, 167 p. %V phd %9 phd %X Proof assistants are becoming widespread for formalization of theories both in computer science and mathematics. They provide rich logics with powerful type systems and machine-checked proofs which increase the confidence in the correctness in complicated and detailed proofs. However, they incur a significant overhead compared to pen-and-paper proofs. This thesis describes work on bridging the gap between high-order proof assistants and first-order automated theorem provers by extending the capabilities of the automated theorem provers to provide features usually found in proof assistants. My first contribution is the development and implementation of a first-order superposition calculus with a polymorphic type system that supports type classes and the accompanying refutational completeness proof for that calculus. The inclusion of the type system into the superposition calculus and solvers completely removes the type encoding overhead when encoding problems from many proof assistants. My second contribution is the development of SupInd, an extension of the typed superposition calculus that supports data types and structural induction over those data types. 
It includes heuristics that guide the induction and conjecture strengthening techniques, which can be applied independently of the underlying calculus. I have implemented the contributions in a tool called Pirate. The evaluations of both contributions show promising results. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6952/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[64]
M. Weigel, “Interactive On-Skin Devices for Expressive Touch-based Interactions,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Skin has been proposed as a large, always-available, and easy-to-access input surface for mobile computing. However, it is fundamentally different from prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques. We present the findings from an elicitation study on how and where people interact on their skin. The findings show that participants use various body locations for on-skin interaction. Moreover, they show that skin allows for expressive interaction using multi-touch input and skin-specific modalities. We contribute three classes of skin-worn devices and their interaction techniques to enable expressive on-skin interactions: iSkin investigates multi-touch and pressure input on various body locations. SkinMarks supports touch, squeeze, and bend sensing with co-located visual output. The devices' conformality to skin enables interaction on highly challenging body locations. Finally, ExpressSkin investigates expressive interaction techniques using fluid combinations of high-resolution pressure, shear, and squeeze input. Taken together, this thesis contributes towards expressive on-skin interaction with multi-touch and skin-specific input modalities on various body locations.
Export
BibTeX
@phdthesis{Weigelphd17, TITLE = {Interactive On-Skin Devices for Expressive Touch-based Interactions}, AUTHOR = {Weigel, Martin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-68857}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Skin has been proposed as a large, always-available, and easy to access input surface for mobile computing. However, it is fundamentally different than prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques. We present the findings from an elicitation study on how and where people interact on their skin. The findings show that participants use various body locations for on-skin interaction. Moreover, they show that skin allows for expressive interaction using multi-touch input and skin-specific modalities. We contribute three skin-worn device classes and their interaction techniques to enable expressive on-skin interactions: iSkin investigates multi-touch and pressure input on various body locations. SkinMarks supports touch, squeeze, and bend sensing with co-located visual output. The devices' conformality to skin enables interaction on highly challenging body locations. Finally, ExpressSkin investigates expressive interaction techniques using fluid combinations of high-resolution pressure, shear, and squeeze input. Taken together, this thesis contributes towards expressive on-skin interaction with multi-touch and skin-specific input modalities on various body locations.}, }
Endnote
%0 Thesis %A Weigel, Martin %Y Steimle, J&#252;rgen %A referee: Olwal, Alex %A referee: Kr&#252;ger, Antonio %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Interactive On-Skin Devices for Expressive Touch-based Interactions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-904F-D %U urn:nbn:de:bsz:291-scidok-68857 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 153 p. %V phd %9 phd %X Skin has been proposed as a large, always-available, and easy to access input surface for mobile computing. However, it is fundamentally different than prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques. We present the findings from an elicitation study on how and where people interact on their skin. The findings show that participants use various body locations for on-skin interaction. Moreover, they show that skin allows for expressive interaction using multi-touch input and skin-specific modalities. We contribute three skin-worn device classes and their interaction techniques to enable expressive on-skin interactions: iSkin investigates multi-touch and pressure input on various body locations. SkinMarks supports touch, squeeze, and bend sensing with co-located visual output. The devices' conformality to skin enables interaction on highly challenging body locations. Finally, ExpressSkin investigates expressive interaction techniques using fluid combinations of high-resolution pressure, shear, and squeeze input. Taken together, this thesis contributes towards expressive on-skin interaction with multi-touch and skin-specific input modalities on various body locations. 
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6885/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[65]
X. Wu, “Structure-aware Content Creation,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Nowadays, access to digital information has become ubiquitous, and three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging the real and virtual worlds, which prompts a huge demand for three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem and a long-standing challenge in computer graphics and related fields. In this thesis, we propose several techniques for easing the content creation process, which have the common theme of being structure-aware, i.e., maintaining global relations among the parts of a shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because their concise yet highly abstract principles are universally applicable to most regular patterns. We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry-preserving deformations and developed a method to explore this space in real time, which significantly simplifies the generation of symmetry-preserving shape variants. Second, we empirically studied three-dimensional offset statistics and developed a fully automatic retargeting application, which is based on the verified sparsity. Finally, we made a step forward in solving the approximate three-dimensional partial symmetry detection problem using a novel co-occurrence analysis method, which could serve as a foundation for high-level applications.
Export
BibTeX
@phdthesis{wuphd2017, TITLE = {Structure-aware Content Creation}, AUTHOR = {Wu, Xiaokun}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67750}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging connections between the real and virtual world, which prompt the huge demand for massive three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem, and long standing challenge in compute graphics and related fields. In this thesis, we propose several techniques for lightening up the content creation process, which have the common theme of being structure-aware, \ie maintaining global relations among the parts of shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because of their concise yet highly abstract principles are universally applicable to most regular patterns. We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry preserving deformations, and developed a method to explore this space in real-time, which significantly simplified the generation of symmetry preserving shape variants. Second, we empirically studied three-dimensional offset statistics, and developed a fully automatic retargeting application, which is based on verified sparsity. Finally, we made step forward in solving the approximate three-dimensional partial symmetry detection problem, using a novel co-occurrence analysis method, which could serve as the foundation to high-level applications.}, }
Endnote
%0 Thesis %A Wu, Xiaokun %Y Seidel, Hans-Peter %A referee: Wand, Michael %A referee: Hildebrandt, Klaus %A referee: Klein, Reinhard %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Structure-aware Content Creation : Detection, Retargeting and Deformation %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-8072-6 %U urn:nbn:de:bsz:291-scidok-67750 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P viii, 61 p. %V phd %9 phd %X Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging connections between the real and virtual world, which prompt the huge demand for massive three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem, and long standing challenge in compute graphics and related fields. In this thesis, we propose several techniques for lightening up the content creation process, which have the common theme of being structure-aware, \ie maintaining global relations among the parts of shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because of their concise yet highly abstract principles are universally applicable to most regular patterns. We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry preserving deformations, and developed a method to explore this space in real-time, which significantly simplified the generation of symmetry preserving shape variants. 
Second, we empirically studied three-dimensional offset statistics, and developed a fully automatic retargeting application, which is based on verified sparsity. Finally, we made step forward in solving the approximate three-dimensional partial symmetry detection problem, using a novel co-occurrence analysis method, which could serve as the foundation to high-level applications. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6775/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
2016
[66]
N. Azmy, “A Machine-checked Proof of Correctness of Pastry,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
A distributed hash table (DHT) is a peer-to-peer network that offers the function of a classic hash table, but where different key-value pairs are stored at different nodes on the network. Like a classic hash table, the main function provided by a DHT is key lookup, which retrieves the value stored at a given key. Examples of DHT protocols include Chord, Pastry, Kademlia and Tapestry. Such DHT protocols offer certain correctness and performance guarantees, but formal verification typically discovers border cases that violate those guarantees. In his PhD thesis, Tianxiang Lu reported correctness problems in published versions of Pastry and developed a model called LuPastry, for which he provided a partial proof of correct delivery of lookup messages assuming no node failure, mechanized in the TLA+ Proof System. In analyzing Lu's proof, I discovered that it contained unproven assumptions, and found counterexamples to several of these assumptions. The contribution of this thesis is threefold. First, I present LuPastry+, a revised TLA+ specification of LuPastry. Aside from needed bug fixes, LuPastry+ contains new definitions that make the specification more modular and significantly improve proof automation. Second, I present a complete TLA+ proof of correct delivery for LuPastry+. Third, I prove that the final step of the node join process of LuPastry/LuPastry+ is not necessary to achieve consistency. In particular, I develop a new specification with a simpler node join process, which I denote by Simplified LuPastry+, and prove correct delivery of lookup messages for this new specification. The proof of correctness of Simplified LuPastry+ is written by reusing the proof for LuPastry+, which represents a success story in proof reuse, especially for proofs of this size.
Each of the two proofs amounts to over 32,000 proof steps; to my knowledge, they are currently the largest proofs written in the TLA+ language, and, together with Lu's proof, the only examples of applying full theorem proving to the verification of DHT protocols.
Export
BibTeX
@phdthesis{Azmyphd16, TITLE = {A Machine-checked Proof of Correctness of Pastry}, AUTHOR = {Azmy, Noran}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67309}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, ABSTRACT = {A distributed hash table (DHT) is a peer-to-peer network that offers the function of a classic hash table, but where different key-value pairs are stored at different nodes on the network. Like a classic hash table, the main function provided by a DHT is key lookup, which retrieves the value stored at a given key. Examples of DHT protocols include Chord, Pastry, Kademlia and Tapestry. Such DHT protocols certain correctness and performance guarantees, but formal verification typically discovers border cases that violate those guarantees. In his PhD thesis, Tianxiang Lu reported correctness problems in published versions of Pastry and developed a model called LuPastry, for which he provided a partial proof of correct delivery of lookup messages assuming no node failure, mechanized in the TLA+ Proof System. In analyzing Lu's proof, I discovered that it contained unproven assumptions, and found counterexamples to several of these assumptions. The contribution of this thesis is threefold. First, I present LuPastry+, a revised TLA+ specification of LuPastry. Aside from needed bug fixes, LuPastry+ contains new definitions that make the specification more modular and significantly improve proof automation. Second, I present a complete TLA+ proof of correct delivery for LuPastry+. Third, I prove that the final step of the node join process of LuPastry/LuPastry+ is not necessary to achieve consistency. In particular, I develop a new specification with a simpler node join process, which I denote by Simplified LuPastry+, and prove correct delivery of lookup messages for this new specification. 
The proof of correctness of Simplified LuPastry+ is written by reusing the proof for LuPastry+, which represents a success story in proof reuse, especially for proofs of this size. Each of the two proofs amounts to over 32,000 proof steps; to my knowledge, they are currently the largest proofs written in the TLA+ language, and---together with Lu's proof---the only examples of applying full theorem proving for the verification of DHT protocols}, }
Endnote
%0 Thesis %A Azmy, Noran %Y Weidenbach, Christoph %A referee: Merz, Stephan %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T A Machine-checked Proof of Correctness of Pastry : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-3BAD-9 %U urn:nbn:de:bsz:291-scidok-67309 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2016 %P ix, 119 p. %V phd %9 phd %X A distributed hash table (DHT) is a peer-to-peer network that offers the function of a classic hash table, but where different key-value pairs are stored at different nodes on the network. Like a classic hash table, the main function provided by a DHT is key lookup, which retrieves the value stored at a given key. Examples of DHT protocols include Chord, Pastry, Kademlia and Tapestry. Such DHT protocols certain correctness and performance guarantees, but formal verification typically discovers border cases that violate those guarantees. In his PhD thesis, Tianxiang Lu reported correctness problems in published versions of Pastry and developed a model called LuPastry, for which he provided a partial proof of correct delivery of lookup messages assuming no node failure, mechanized in the TLA+ Proof System. In analyzing Lu's proof, I discovered that it contained unproven assumptions, and found counterexamples to several of these assumptions. The contribution of this thesis is threefold. First, I present LuPastry+, a revised TLA+ specification of LuPastry. Aside from needed bug fixes, LuPastry+ contains new definitions that make the specification more modular and significantly improve proof automation. Second, I present a complete TLA+ proof of correct delivery for LuPastry+. Third, I prove that the final step of the node join process of LuPastry/LuPastry+ is not necessary to achieve consistency. 
In particular, I develop a new specification with a simpler node join process, which I denote by Simplified LuPastry+, and prove correct delivery of lookup messages for this new specification. The proof of correctness of Simplified LuPastry+ is written by reusing the proof for LuPastry+, which represents a success story in proof reuse, especially for proofs of this size. Each of the two proofs amounts to over 32,000 proof steps; to my knowledge, they are currently the largest proofs written in the TLA+ language, and---together with Lu's proof---the only examples of applying full theorem proving for the verification of DHT protocols %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6730/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[67]
M. Bachynskyi, “Biomechanical Models for Human-Computer Interaction,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, already are a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number, or even absence, of input movement constraints imposed by a device form factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of four issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput, ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions: we adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; identify applicability limits of the method for a range of HCI tasks; validate the method outputs against ground-truth recordings in a typical HCI setting; demonstrate the added value of the method in the analysis of performance and ergonomics of touchscreen devices; and summarize the performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the four above-mentioned issues of post-desktop input.
The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) and at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of the movement space. In our adaptation, the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity.
Export
BibTeX
@phdthesis{Bachyphd16, TITLE = {Biomechanical Models for Human-Computer Interaction}, AUTHOR = {Bachynskyi, Myroslav}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66888}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, ABSTRACT = {Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, already are a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number or even absence of input movement constraints imposed by a device form-factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of 4 issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. 
We achieve this through the following contributions: - adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; - identify applicability limits of the method for a range of HCI tasks; - validate the method outputs against ground truth recordings in typical HCI setting; - demonstrate the added value of the method in analysis of performance and ergonomics of touchscreen devices; and - summarize performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the 4 above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) as well as at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity.}, }
Endnote
%0 Thesis %A Bachynskyi, Myroslav %Y Steimle, J&#252;rgen %A referee: Schmidt, Albrecht %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Biomechanical Models for Human-Computer Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-0FD4-9 %U urn:nbn:de:bsz:291-scidok-66888 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2016 %P xiv, 206 p. %V phd %9 phd %X Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, already are a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number or even absence of input movement constraints imposed by a device form-factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of 4 issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. 
We achieve this through the following contributions: - adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; - identify applicability limits of the method for a range of HCI tasks; - validate the method outputs against ground truth recordings in typical HCI setting; - demonstrate the added value of the method in analysis of performance and ergonomics of touchscreen devices; and - summarize performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the 4 above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) as well as at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity. %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6688/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[68]
W.-C. Chiu, “Bayesian Non-Parametrics for Multi-Modal Segmentation,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{walonPhDThesis2016, TITLE = {Bayesian Non-Parametrics for Multi-Modal Segmentation}, AUTHOR = {Chiu, Wei-Chen}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66378}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
Endnote
%0 Thesis %A Chiu, Wei-Chen %Y Fritz, Mario %A referee: Demberg, Vera %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Bayesian Non-Parametrics for Multi-Modal Segmentation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-788A-F %U urn:nbn:de:bsz:291-scidok-66378 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XII, 155 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6637/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[69]
L. Del Corro, “Methods for Open Information Extraction and Sense Disambiguation on Natural Language Text,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{delcorrophd15, TITLE = {Methods for Open Information Extraction and Sense Disambiguation on Natural Language Text}, AUTHOR = {Del Corro, Luciano}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
Endnote
%0 Thesis %A Del Corro, Luciano %Y Gemulla, Rainer %A referee: Ponzetto, Simone Paolo %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Methods for Open Information Extraction and Sense Disambiguation on Natural Language Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-B3DB-3 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xiv, 101 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6346/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[70]
N. T. Doncheva, “Network Biology Methods for Functional Characterization and Integrative Prioritization of Disease Genes and Proteins,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{DonchevaPhD2016, TITLE = {Network Biology Methods for Functional Characterization and Integrative Prioritization of Disease Genes and Proteins}, AUTHOR = {Doncheva, Nadezhda Tsankova}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65957}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
Endnote
%0 Thesis %A Doncheva, Nadezhda Tsankova %Y Albrecht, Mario %A referee: Lengauer, Thomas %A referee: Lenhof, Hans-Peter %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Network Biology Methods for Functional Characterization and Integrative Prioritization of Disease Genes and Proteins : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-1921-A %U urn:nbn:de:bsz:291-scidok-65957 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XII, 242 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6595/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[71]
O. Elek, “Efficient Methods for Physically-based Rendering of Participating Media,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{ElekPhD2016, TITLE = {Efficient Methods for Physically-based Rendering of Participating Media}, AUTHOR = {Elek, Oskar}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65357}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[72]
H. Hatefi Ardakani, “Finite Horizon Analysis of Markov Automata,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They subsume, as special cases, discrete- and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they can be equipped with reward and resource structures in order to analyse quantitative aspects of systems, such as performance metrics, energy consumption, and repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety-critical systems; the Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. This expressiveness has thus far prevented their efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis lies in their analysis under budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study the mathematical foundations of Markov automata, since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we focus on the problem of computing the optimal expected resource-bounded reward. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses, in particular, optimal time-bounded reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem efficiently. We report on an implementation of our approach in a supporting tool and demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies.
Export
BibTeX
@phdthesis{Hatefiphd17, TITLE = {Finite Horizon Analysis of {M}arkov Automata}, AUTHOR = {Hatefi Ardakani, Hassan}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67438}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, ABSTRACT = {Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span as special cases, the models of discrete and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they might be equipped with reward and resource structures in order to be used for analysing quantitative aspects of systems, like performance metrics, energy consumption, repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety critical systems. The Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness thus far prevents them from efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis lies in their analysis under some budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study mathematical foundations of Markov automata since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we put the problem of computing the optimal expected resource-bounded reward in our focus. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses in particular the optimal time-bound reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem in an efficient way. We report on an implementation of our approach in a supporting tool and also demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies.}, }
[73]
A.-C. Hauschild, “Computational Methods for Breath Metabolomics in Clinical Diagnostics,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Hauschild_PhD2016, TITLE = {Computational Methods for Breath Metabolomics in Clinical Diagnostics}, AUTHOR = {Hauschild, Anne-Christin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65874}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[74]
P. Kellnhofer, “Perceptual Modeling for Stereoscopic 3D,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Virtual and Augmented Reality applications typically rely on stereoscopic presentation and involve intensive object and observer motion. The combination of high dynamic range and stereoscopic capabilities has become popular for consumer displays and is a desirable functionality of head-mounted displays to come. This thesis focuses on the complex interactions between all these visual cues on digital displays. The first part investigates the challenges of combining stereoscopic 3D with motion. We consider the interaction between continuous motion and its presentation as discrete frames. Then, we discuss disparity processing for accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and of eye-fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond current display capabilities by considering the full perceivable luminance range, and we simulate the real-world experience under such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and of reflective/refractive surface rendering. The core of our research methodology is perceptual modeling, supported by our own experimental studies, to overcome the limitations of current display technologies and improve the viewer experience by enhancing perceived depth, reducing visual artifacts, or improving viewing comfort.
Export
BibTeX
@phdthesis{Kellnhoferphd2016, TITLE = {Perceptual Modeling for Stereoscopic {3D}}, AUTHOR = {Kellnhofer, Petr}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66813}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, ABSTRACT = {Virtual and Augmented Reality applications typically rely on both stereoscopic presentation and involve intensive object and observer motion. A combination of high dynamic range and stereoscopic capabilities become popular for consumer displays, and is a desirable functionality of head mounted displays to come. The thesis is focused on complex interactions between all these visual cues on digital displays. The first part investigates challenges of the stereoscopic 3D and motion combination. We consider an interaction between the continuous motion presented as discrete frames. Then, we discuss a disparity processing for accurate reproduction of objects moving in the depth direction. Finally, we investigate the depth perception as a function of motion parallax and eye fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond the current display capabilities by considering the full perceivable luminance range and we simulate the real world experience in such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and reflective/refractive surface rendering. The core of our research methodology is perceptual modeling supported by our own experimental studies to overcome limitations of current display technologies and improve the viewer experience by enhancing perceived depth, reducing visual artifacts or improving viewing comfort.}, }
[75]
O. Klehm, “User-Guided Scene Stylization using Efficient Rendering Techniques,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Klehmphd2016, TITLE = {User-Guided Scene Stylization using Efficient Rendering Techniques}, AUTHOR = {Klehm, Oliver}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65321}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[76]
M. Košta, “New Concepts for Real Quantifier Elimination by Virtual Substitution,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Kostaphd16, TITLE = {New Concepts for Real Quantifier Elimination by Virtual Substitution}, AUTHOR = {Ko{\v s}ta, Marek}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[77]
M. Künnemann, “Tight(er) Bounds for Similarity Measures, Smoothed Approximation and Broadcasting,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Kuennemannphd2016, TITLE = {Tight(er) Bounds for Similarity Measures, Smoothed Approximation and Broadcasting}, AUTHOR = {K{\"u}nnemann, Marvin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65991}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[78]
S. Ott, “Algorithms for Classical and Modern Scheduling Problems,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Ott_PhD2016, TITLE = {Algorithms for Classical and Modern Scheduling Problems}, AUTHOR = {Ott, Sebastian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65763}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[79]
A. Pironti, “Improving and Validating Data-driven Genotypic Interpretation Systems for the Selection of Antiretroviral Therapies,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Pirontiphd16, TITLE = {Improving and Validating Data-driven Genotypic Interpretation Systems for the Selection of Antiretroviral Therapies}, AUTHOR = {Pironti, Alejandro}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67190}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[80]
L. Pishchulin, “Articulated People Detection and Pose Estimation in Challenging Real World Environments,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{PishchulinPhD2016, TITLE = {Articulated People Detection and Pose Estimation in Challenging Real World Environments}, AUTHOR = {Pishchulin, Leonid}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65478}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[81]
S. S. Rangapuram, “Graph-based Methods for Unsupervised and Semi-supervised Data Analysis,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{rangphd17, TITLE = {Graph-based Methods for Unsupervised and Semi-supervised Data Analysis}, AUTHOR = {Rangapuram, Syama Sundar}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66590}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[82]
B. Reinert, “Interactive, Example-driven Synthesis and Manipulation of Visual Media,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Reinertbphd17, TITLE = {Interactive, Example-driven Synthesis and Manipulation of Visual Media}, AUTHOR = {Reinert, Bernhard}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67660}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[83]
H. Rhodin, “From Motion Capture to Interactive Virtual Worlds: Towards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{RhodinPhD2016, TITLE = {From Motion Capture to Interactive Virtual Worlds: {T}owards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation}, AUTHOR = {Rhodin, Helge}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67413}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[84]
S. Sridhar, “Tracking Hands in Action for Gesture-based Computer Input,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{SridharPhD2016, TITLE = {Tracking Hands in Action for Gesture-based Computer Input}, AUTHOR = {Sridhar, Srinath}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67712}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[85]
N. Tandon, “Commonsense Knowledge Acquisition and Applications,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{TandonPhD2016, TITLE = {Commonsense Knowledge Acquisition and Applications}, AUTHOR = {Tandon, Niket}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66291}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[86]
C. Teflioudi, “Algorithms for Shared-Memory Matrix Completion and Maximum Inner Product Search,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Teflioudiphd2016, TITLE = {Algorithms for Shared-Memory Matrix Completion and Maximum Inner Product Search}, AUTHOR = {Teflioudi, Christina}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-64699}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[87]
K. Templin, “Depth, Shading, and Stylization in Stereoscopic Cinematography,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Templinphd15, TITLE = {Depth, Shading, and Stylization in Stereoscopic Cinematography}, AUTHOR = {Templin, Krzysztof}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-64390}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[88]
B. Turoňová, “Progressive Stochastic Reconstruction Technique for Cryo Electron Tomography,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{TuronovaPhD2016, TITLE = {Progressive Stochastic Reconstruction Technique for Cryo Electron Tomography}, AUTHOR = {Turo{\v n}ov{\'a}, Beata}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66400}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
[89]
M. Yahya, “Question Answering and Query Processing for Extended Knowledge Graphs,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{yahyaphd2016, TITLE = {Question Answering and Query Processing for Extended Knowledge Graphs}, AUTHOR = {Yahya, Mohamed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, DATE = {2016}, }
2015
[90]
M. Abdel Maksoud, “Processor Pipelines in WCET Analysis,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Abdelphd15, TITLE = {Processor Pipelines in {WCET} Analysis}, AUTHOR = {Abdel Maksoud, Mohamed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[91]
F. Abed, “Coordinating Selfish Players in Scheduling Games,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{AbedPhd15, TITLE = {Coordinating Selfish Players in Scheduling Games}, AUTHOR = {Abed, Fidaa}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[92]
A. Elhayek, “Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{ElhayekPhd15, TITLE = {Marker-less Motion Capture in General Scenes with Sparse Multi-camera Setups}, AUTHOR = {Elhayek, Ahmed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[93]
I. Georgiev, “Path Sampling Techniques for Efficient Light Transport Simulation,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Georgievphd15, TITLE = {Path Sampling Techniques for Efficient Light Transport Simulation}, AUTHOR = {Georgiev, Iliyan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[94]
W. Hagemann, “Symbolic Orthogonal Projections: A New Polyhedral Representation for Reachability Analysis of Hybrid Systems,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{HagemannPhd15, TITLE = {Symbolic Orthogonal Projections: A New Polyhedral Representation for Reachability Analysis of Hybrid Systems}, AUTHOR = {Hagemann, Willem}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[95]
J. Hoffart, “Discovering and Disambiguating Named Entities in Text,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Hoffartthesis, TITLE = {Discovering and Disambiguating Named Entities in Text}, AUTHOR = {Hoffart, Johannes}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[96]
R. Ibragimov, “Exact and Heuristic Algorithms for Network Alignment using Graph Edit Distance Models,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Ibragimovphd14, TITLE = {Exact and Heuristic Algorithms for Network Alignment using Graph Edit Distance Models}, AUTHOR = {Ibragimov, Rashid}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[97]
M. Lamotte-Schubert, “Automatic Authorization Analysis,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{LamottePhd15, TITLE = {Automatic Authorization Analysis}, AUTHOR = {Lamotte-Schubert, Manuel}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-62575}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[98]
A. Neumann, “On Efficiency and Reliability in Computer Science,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{NeumannPhd15, TITLE = {On Efficiency and Reliability in Computer Science}, AUTHOR = {Neumann, Adrian}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[99]
C. Nguyen, “Data-driven Approaches for Interactive Appearance Editing,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{NguyenPhD2015, TITLE = {Data-driven Approaches for Interactive Appearance Editing}, AUTHOR = {Nguyen, Chuong}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-62372}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[100]
S. Olberding, “Fabricating Custom-shaped Thin-film Interactive Surfaces,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{OlberdingPhD2015, TITLE = {Fabricating Custom-shaped Thin-film Interactive Surfaces}, AUTHOR = {Olberding, Simon}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-63285}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[101]
B. Pepik, “Richer Object Representations for Object Class Detection in Challenging Real World Images,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Pepikphd15, TITLE = {Richer Object Representations for Object Class Detection in Challenging Real World Images}, AUTHOR = {Pepik, Bojan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[102]
A. Pourmiri, “Random Walk-based Algorithms on Networks,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Pourmiriphd15, TITLE = {Random Walk-based Algorithms on Networks}, AUTHOR = {Pourmiri, Ali}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[103]
F. Ramezani, “Application of Multiplicative Weights Update Method in Algorithmic Game Theory,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{RamezaniPHD2015, TITLE = {Application of Multiplicative Weights Update Method in Algorithmic Game Theory}, AUTHOR = {Ramezani, Fahimeh}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[104]
C. Rizkallah, “Verification of Program Computations,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{RizkallahPhd15, TITLE = {Verification of Program Computations}, AUTHOR = {Rizkallah, Christine}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[105]
S. Seufert, “Algorithmic Building Blocks for Relationship Analysis over Large Graphs,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Seufertphd15, TITLE = {Algorithmic Building Blocks for Relationship Analysis over Large Graphs}, AUTHOR = {Seufert, Stephan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[106]
M. Suda, “Resolution-based Methods for Linear Temporal Reasoning,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{SudaPhd15, TITLE = {Resolution-based Methods for Linear Temporal Reasoning}, AUTHOR = {Suda, Martin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-62747}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[107]
T. Tylenda, “Methods and Tools for Summarization of Entities and Facts in Knowledge Bases,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{TylendaPhd15, TITLE = {Methods and Tools for Summarization of Entities and Facts in Knowledge Bases}, AUTHOR = {Tylenda, Tomasz}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[108]
Z. Wang, “Pattern Search for the Visualization of Scalar, Vector, and Line Fields,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{WangPhd15, TITLE = {Pattern Search for the Visualization of Scalar, Vector, and Line Fields}, AUTHOR = {Wang, Zhongjie}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
[109]
M. A. Yosef, “U-AIDA: A Customizable System for Named Entity Recognition, Classification, and Disambiguation,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@phdthesis{Yosefphd15, TITLE = {U-{AIDA}: A Customizable System for Named Entity Recognition, Classification, and Disambiguation}, AUTHOR = {Yosef, Mohamed Amir}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, DATE = {2015}, }
2014
[110]
F. Alvanaki, “Mining Interesting Events on Large and Dynamic Data,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Alvanakithesis, TITLE = {Mining Interesting Events on Large and Dynamic Data}, AUTHOR = {Alvanaki, Foteini}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[111]
Y. Assenov, “Identification and Prioritization of Genomic Loci with Disease-specific Methylation,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{AssenovPhD2014, TITLE = {Identification and Prioritization of Genomic Loci with Disease-specific Methylation}, AUTHOR = {Assenov, Yassen}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58865}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[112]
B. Beggel, “Determining and Utilizing the Quasispecies of the Hepatitis B Virus in Clinical Applications,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Beggeltheses2014, TITLE = {Determining and Utilizing the Quasispecies of the Hepatitis {B} Virus in Clinical Applications}, AUTHOR = {Beggel, Bastian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58317}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[113]
H. Blankenburg, “Computational Methods for Integrating and Analyzing Human Systems Biology Data,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Blankenburg2014, TITLE = {Computational Methods for Integrating and Analyzing Human Systems Biology Data}, AUTHOR = {Blankenburg, Hagen}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59329}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[114]
K. Bringmann, “Sampling from Discrete Distributions and Computing Fréchet Distances,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{BringmannPhD2014, TITLE = {Sampling from Discrete Distributions and Computing {F}r{\'e}chet Distances}, AUTHOR = {Bringmann, Karl}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[115]
M. Dietzen, “Modeling Protein Interactions in Protein Binding Sites and Oligomeric Protein Complexes,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{DietzenPhD2014, TITLE = {Modeling Protein Interactions in Protein Binding Sites and Oligomeric Protein Complexes}, AUTHOR = {Dietzen, Matthias}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59402}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[116]
R. Dimitrova, “Synthesis and Control of Infinite-state Systems with Partial Observability,” Universität des Saarlandes, Saarbrücken, 2014.
Abstract
Complex computer systems play an important role in every part of everyday life and their correctness is often vital to human safety. In light of the recent advances in the area of formal methods and the increasing availability and maturity of tools and techniques, the use of verification techniques to show that a system satisfies a specified property is about to become an integral part of the development process. To minimize the development costs, formal methods must be applied as early as possible, before the entire system is fully developed, or even at the stage when only its specification is available. The goal of synthesis is to automatically construct an implementation guaranteed to fulfill the provided specification, and, if no implementation exists, to report that the given requirements cannot be realized. When synthesizing an individual component within a system and its external environment, the synthesis procedure must take into account the component's interface and deliver implementations that comply with it. For example, what a component can observe about its environment may be restricted by imprecise sensors or inaccessible communication channels. In addition, sufficiently precise models of a component's environment are typically infinite-state, for example due to modeling real time or unbounded communication buffers. This thesis presents novel synthesis methods that respect the given interface limitations of the synthesized system components and are applicable to infinite-state models. The studied computational model is that of infinite-state two-player games under incomplete information. The contributions are structured into three parts, corresponding to a classification of such games according to the interface between the synthesized component and its environment.
In the first part, we obtain decidability results for a class of game structures where the player corresponding to the synthesized component has a given finite set of possible observations and a finite set of possible actions. A prominent type of systems for which the interface of a component naturally defines a finite set of observations are Lossy Channel Systems. We provide symbolic game solving and strategy synthesis algorithms for lossy channel games under incomplete information with safety and reachability winning conditions. Our second contribution is a counterexample-guided abstraction refinement scheme for solving infinite-state games under incomplete information in which the actions available to the component are still finitely many, but no finite set of possible observations is given. This situation is common, for example, in the synthesis of mutex protocols or robot controllers. In this setting, the observations correspond to observation predicates, which are logical formulas, and their computation is an integral part of our synthesis procedure. The resulting game solving method is applicable to games that are out of the scope of other available techniques. Lastly, we study systems in which, in addition to the possibly infinite set of observation predicates, the component can choose between infinitely many possible actions. Timed games under incomplete information are a fundamental class of games for which this is the case. We extend the abstraction-refinement procedure to develop the first systematic method for the synthesis of observation predicates for timed control. Automatically refining the set of candidate observations based on counterexamples demonstrates better potential than brute-force enumeration of observation sets, in particular for systems where fine granularity of the observations is necessary.
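The safety and reachability winning conditions mentioned above can be illustrated on a finite game graph with the classical attractor computation. The sketch below is a minimal finite-state, full-information version, purely for illustration; it is not the thesis's symbolic algorithm for infinite-state games under incomplete information, and all names are hypothetical.

```python
# Minimal sketch (illustrative only): attractor computation for a
# finite two-player reachability game with full information.
# owner[s] is 0 or 1 (who moves at s); succ[s] lists successors of s.

def attractor(states, owner, succ, target):
    """States from which player 0 can force a visit to `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in attr:
                continue
            succs = succ[s]
            if owner[s] == 0 and any(t in attr for t in succs):
                attr.add(s)   # player 0 can pick a successor in the attractor
                changed = True
            elif owner[s] == 1 and succs and all(t in attr for t in succs):
                attr.add(s)   # player 1 cannot avoid the attractor
                changed = True
    return attr
```

For a safety condition, the dual view applies: player 0 wins from every state outside the opponent's attractor of the unsafe states.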
Export
BibTeX
@phdthesis{Dimitrova2014, TITLE = {Synthesis and Control of Infinite-state Systems with Partial Observability}, AUTHOR = {Dimitrova, Rayna}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Complex computer systems play an important role in every part of everyday life and their correctness is often vital to human safety. In light of the recent advances in the area of formal methods and the increasing availability and maturity of tools and techniques, the use of verification techniques to show that a system satisfies a specified property is about to become an integral part of the development process. To minimize the development costs, formal methods must be applied as early as possible, before the entire system is fully developed, or even at the stage when only its specification is available. The goal of synthesis is to automatically construct an implementation guaranteed to fulfill the provided specification, and, if no implementation exists, to report that the given requirements cannot be realized. When synthesizing an individual component within a system and its external environment, the synthesis procedure must take into account the component's interface and deliver implementations that comply with it. For example, what a component can observe about its environment may be restricted by imprecise sensors or inaccessible communication channels. In addition, sufficiently precise models of a component's environment are typically infinite-state, for example due to modeling real time or unbounded communication buffers. This thesis presents novel synthesis methods that respect the given interface limitations of the synthesized system components and are applicable to infinite-state models. The studied computational model is that of infinite-state two-player games under incomplete information. 
The contributions are structured into three parts, corresponding to a classification of such games according to the interface between the synthesized component and its environment. In the first part, we obtain decidability results for a class of game structures where the player corresponding to the synthesized component has a given finite set of possible observations and a finite set of possible actions. A prominent type of systems for which the interface of a component naturally defines a finite set of observations are Lossy Channel Systems. We provide symbolic game solving and strategy synthesis algorithms for lossy channel games under incomplete information with safety and reachability winning conditions. Our second contribution is a counterexample-guided abstraction refinement scheme for solving infinite-state games under incomplete information in which the actions available to the component are still finitely many, but no finite set of possible observations is given. This situation is common, for example, in the synthesis of mutex protocols or robot controllers. In this setting, the observations correspond to observation predicates, which are logical formulas, and their computation is an integral part of our synthesis procedure. The resulting game solving method is applicable to games that are out of the scope of other available techniques. Lastly, we study systems in which, in addition to the possibly infinite set of observation predicates, the component can choose between infinitely many possible actions. Timed games under incomplete information are a fundamental class of games for which this is the case. We extend the abstraction-refinement procedure to develop the first systematic method for the synthesis of observation predicates for timed control. 
Automatically refining the set of candidate observations based on counterexamples demonstrates better potential than brute-force enumeration of observation sets, in particular for systems where fine granularity of the observations is necessary.}, }
[117]
M. Dylla, “Efficient Querying and Learning in Probabilistic and Temporal Databases,” Universität des Saarlandes, Saarbrücken, 2014.
Abstract
Probabilistic databases store, query, and manage large amounts of uncertain information. This thesis advances the state-of-the-art in probabilistic databases in three different ways: 1. We present a closed and complete data model for temporal probabilistic databases and analyze its complexity. Queries are posed via temporal deduction rules which induce lineage formulas capturing both time and uncertainty. 2. We devise a methodology for computing the top-k most probable query answers. It is based on first-order lineage formulas representing sets of answer candidates. Theoretically derived probability bounds on these formulas enable pruning low-probability answers. 3. We introduce the problem of learning tuple probabilities which allows updating and cleaning of probabilistic databases. We study its complexity, characterize its solutions, cast it into an optimization problem, and devise an approximation algorithm based on stochastic gradient descent. All of the above contributions support consistency constraints and are evaluated experimentally.
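The lineage formulas mentioned in this abstract attach to each query answer a Boolean formula over independent base tuples; the answer's probability is the probability that its formula holds. The sketch below illustrates only this semantics with a brute-force possible-worlds enumeration and a naive top-k; the thesis's method instead prunes low-probability candidates using bounds on first-order lineage, and all names here are illustrative.

```python
# Sketch (illustrative, exponential in the number of tuples): exact
# probability of a DNF lineage formula over independent tuples, plus a
# naive top-k over answers. The thesis avoids this enumeration by
# pruning with probability bounds on first-order lineage formulas.

def lineage_prob(dnf, prob):
    """dnf: list of clauses; each clause is a list of tuple ids (a conjunction).
    prob: marginal probability of each independent tuple."""
    tids = sorted({t for clause in dnf for t in clause})
    total = 0.0
    for bits in range(1 << len(tids)):
        present = {t for i, t in enumerate(tids) if bits >> i & 1}
        weight = 1.0
        for t in tids:
            weight *= prob[t] if t in present else 1.0 - prob[t]
        if any(all(t in present for t in clause) for clause in dnf):
            total += weight   # this possible world satisfies the lineage
    return total

def top_k(answers, prob, k):
    """answers: mapping answer -> DNF lineage. Returns the k most probable."""
    ranked = sorted(answers, key=lambda a: lineage_prob(answers[a], prob),
                    reverse=True)
    return ranked[:k]
```

For instance, with independent tuples t1 and t2 each of probability 0.5, the lineage t1 OR t2 evaluates to 0.75, matching inclusion-exclusion.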
Export
BibTeX
@phdthesis{DyllaPhDThesis2014, TITLE = {Efficient Querying and Learning in Probabilistic and Temporal Databases}, AUTHOR = {Dylla, Maximilian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58146}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Probabilistic databases store, query, and manage large amounts of uncertain information. This thesis advances the state-of-the-art in probabilistic databases in three different ways: 1. We present a closed and complete data model for temporal probabilistic databases and analyze its complexity. Queries are posed via temporal deduction rules which induce lineage formulas capturing both time and uncertainty. 2. We devise a methodology for computing the top-k most probable query answers. It is based on first-order lineage formulas representing sets of answer candidates. Theoretically derived probability bounds on these formulas enable pruning low-probability answers. 3. We introduce the problem of learning tuple probabilities which allows updating and cleaning of probabilistic databases. We study its complexity, characterize its solutions, cast it into an optimization problem, and devise an approximation algorithm based on stochastic gradient descent. All of the above contributions support consistency constraints and are evaluated experimentally.}, }
[118]
L. Feuerbach, “Evolutionary Epigenomics - Identifying Functional Genome Elements by Epigenetic Footprints in the DNA,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Feuerbach2014, TITLE = {Evolutionary Epigenomics -- Identifying Functional Genome Elements by Epigenetic Footprints in the {DNA}}, AUTHOR = {Feuerbach, Lars}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[119]
A. Fietzke, “Labelled Superposition,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Fietzke2014, TITLE = {Labelled Superposition}, AUTHOR = {Fietzke, Arnaud}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[120]
S. Gerling, “Plugging in Trust and Privacy : Three Systems to Improve Widely used Ecosystems,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Gerling2014, TITLE = {Plugging in Trust and Privacy : Three Systems to Improve Widely used Ecosystems}, AUTHOR = {Gerling, Sebastian}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[121]
J. Günther, “Ray Tracing of Dynamic Scenes,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{GuentherPhD2014, TITLE = {Ray Tracing of Dynamic Scenes}, AUTHOR = {G{\"u}nther, Johannes}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59295}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[122]
K. Halachev, “Exploratory Visualizations and Statistical Analysis of Large, Heterogeneous Epigenetic Datasets,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Halachev2014, TITLE = {Exploratory Visualizations and Statistical Analysis of Large, Heterogeneous Epigenetic Datasets}, AUTHOR = {Halachev, Konstantin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[123]
A. Jain, “Data-driven Methods for Interactive Visual Content Creation and Manipulation,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{PhDThesis:JainArjun, TITLE = {Data-driven Methods for Interactive Visual Content Creation and Manipulation}, AUTHOR = {Jain, Arjun}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58210}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[124]
M. Khosla, “Multiple Choice Allocations with Small Maximum Loads,” Universität des Saarlandes, Saarbrücken, 2014.
Abstract
The idea of using multiple choices to improve allocation schemes is now well understood and is often illustrated by the following example. Suppose n balls are allocated to n bins, with each ball choosing a bin independently and uniformly at random. The maximum load, i.e. the number of balls in the most loaded bin, will then be approximately log n / log log n with high probability. Suppose now the balls are allocated sequentially by placing each ball in the least loaded of k ≥ 2 bins chosen independently and uniformly at random. Azar, Broder, Karlin, and Upfal showed that in this scenario the maximum load drops to log log n / log k + Θ(1) with high probability, an exponential improvement over the previous case. In this thesis we investigate multiple-choice allocations from a slightly different perspective. Instead of minimizing the maximum load, we fix the bin capacities and focus on maximizing the number of balls that can be allocated without overloading any bin. In the process we consider, there are m = ⌊cn⌋ balls and n bins, and each ball chooses k bins independently and uniformly at random. Is it possible to assign each ball to one of its choices such that no bin receives more than ℓ balls? For all k ≥ 3 and ℓ ≥ 2 we give a critical value c*_{k,ℓ} such that when c < c*_{k,ℓ} such an allocation exists with high probability, and when c > c*_{k,ℓ} it does not. In case such an allocation exists, how quickly can we find it? Previous work on the total allocation time for the case k ≥ 3 and ℓ = 1 analyzed a breadth-first strategy, which is shown to be linear only in expectation. We give a simple and efficient algorithm, which we call local search allocation (LSA), that finds an allocation for all k ≥ 3 and ℓ = 1. Provided the number of balls is below (but arbitrarily close to) the theoretically achievable load threshold, we give a linear bound for the total allocation time that holds with high probability.
We demonstrate, through simulations, an order of magnitude improvement in total and maximum allocation times compared to the state-of-the-art method. Our results find applications in many areas, including hashing, load balancing, data management, orientability of random hypergraphs, and maximum matchings in a special class of bipartite graphs.
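The balls-into-bins processes described in this abstract are easy to simulate. The sketch below implements the classical sequential k-choice allocation used as the motivating example (it is a simulation of the textbook process, not the thesis's local search allocation; the function name is illustrative):

```python
import random

# Sketch: sequential multiple-choice allocation. Each ball draws k bins
# uniformly at random and goes to the least loaded one. With k = 1 this
# is the single-choice process (max load ~ log n / log log n); with
# k >= 2 the max load drops to ~ log log n / log k. This illustrates
# the classical process, not the thesis's local search allocation (LSA).

def allocate(n_balls, n_bins, k, seed=0):
    rng = random.Random(seed)
    loads = [0] * n_bins
    for _ in range(n_balls):
        choices = [rng.randrange(n_bins) for _ in range(k)]
        best = min(choices, key=lambda b: loads[b])  # least loaded choice
        loads[best] += 1
    return loads
```

Comparing max(allocate(10000, 10000, 1)) against k = 2 on a few seeds typically exhibits the exponential gap between the two regimes.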
Export
BibTeX
@phdthesis{Khosla2014, TITLE = {Multiple Choice Allocations with Small Maximum Loads}, AUTHOR = {Khosla, Megha}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-56957}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {The idea of using multiple choices to improve allocation schemes is now well understood and is often illustrated by the following example. Suppose $n$ balls are allocated to $n$ bins, with each ball choosing a bin independently and uniformly at random. The \emph{maximum load}, i.e. the number of balls in the most loaded bin, will then be approximately $\log n / \log \log n$ with high probability. Suppose now the balls are allocated sequentially by placing each ball in the least loaded of $k \geq 2$ bins chosen independently and uniformly at random. Azar, Broder, Karlin, and Upfal showed that in this scenario the maximum load drops to $\log \log n / \log k + \Theta(1)$ with high probability, an exponential improvement over the previous case. In this thesis we investigate multiple-choice allocations from a slightly different perspective. Instead of minimizing the maximum load, we fix the bin capacities and focus on maximizing the number of balls that can be allocated without overloading any bin. In the process we consider, there are $m = \lfloor cn \rfloor$ balls and $n$ bins, and each ball chooses $k$ bins independently and uniformly at random. \emph{Is it possible to assign each ball to one of its choices such that no bin receives more than $\ell$ balls?} For all $k \geq 3$ and $\ell \geq 2$ we give a critical value $c_{k,\ell}^*$ such that when $c < c_{k,\ell}^*$ such an allocation exists with high probability, and when $c > c_{k,\ell}^*$ it does not. In case such an allocation exists, \emph{how quickly can we find it?} Previous work on the total allocation time for the case $k \geq 3$ and $\ell = 1$ analyzed a \emph{breadth-first strategy}, which is shown to be linear only in expectation. We give a simple and efficient algorithm, which we call \emph{local search allocation} (LSA), that finds an allocation for all $k \geq 3$ and $\ell = 1$. Provided the number of balls is below (but arbitrarily close to) the theoretically achievable load threshold, we give a linear bound for the total allocation time that holds with high probability. We demonstrate, through simulations, an order of magnitude improvement in total and maximum allocation times compared to the state-of-the-art method. Our results find applications in many areas, including hashing, load balancing, data management, orientability of random hypergraphs, and maximum matchings in a special class of bipartite graphs.}, }
[125]
C. Klein, “Matrix Rounding, Evolutionary Algorithms, and Hole Detection,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{KleinChristianPhD2014, TITLE = {Matrix Rounding, Evolutionary Algorithms, and Hole Detection}, AUTHOR = {Klein, Christian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59164}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[126]
S. K. Kondreddi, “Human Computing and Crowdsourcing Methods for Knowledge Acquisition,” Universität des Saarlandes, Saarbrücken, 2014.
Abstract
Ambiguity, complexity, and diversity in natural language textual expressions are major hindrances to automated knowledge extraction. As a result state-of-the-art methods for extracting entities and relationships from unstructured data make incorrect extractions or produce noise. With the advent of human computing, computationally hard tasks have been addressed through human inputs. While text-based knowledge acquisition can benefit from this approach, humans alone cannot bear the burden of extracting knowledge from the vast textual resources that exist today. Even making payments for crowdsourced acquisition can quickly become prohibitively expensive. In this thesis we present principled methods that effectively garner human computing inputs for improving the extraction of knowledge-base facts from natural language texts. Our methods complement automatic extraction techniques with human computing to reap the benefits of both while overcoming each other's limitations. We present the architecture and implementation of HIGGINS, a system that combines an information extraction (IE) engine with a human computing (HC) engine to produce high quality facts. The IE engine combines statistics derived from large Web corpora with semantic resources like WordNet and ConceptNet to construct a large dictionary of entity and relational phrases. It employs specifically designed statistical language models for phrase relatedness to come up with questions and relevant candidate answers that are presented to human workers. Through extensive experiments we establish the superiority of this approach in extracting relation-centric facts from text. In our experiments we extract facts about fictitious characters in narrative text, where the issues of diversity and complexity in expressing relations are far more pronounced. Finally, we also demonstrate how interesting human computing games can be designed for knowledge acquisition tasks.
Export
BibTeX
@phdthesis{Kondreddi2014b, TITLE = {Human Computing and Crowdsourcing Methods for Knowledge Acquisition}, AUTHOR = {Kondreddi, Sarath Kumar}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-57948}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Ambiguity, complexity, and diversity in natural language textual expressions are major hindrances to automated knowledge extraction. As a result, state-of-the-art methods for extracting entities and relationships from unstructured data make incorrect extractions or produce noise. With the advent of human computing, computationally hard tasks have been addressed through human inputs. While text-based knowledge acquisition can benefit from this approach, humans alone cannot bear the burden of extracting knowledge from the vast textual resources that exist today. Even making payments for crowdsourced acquisition can quickly become prohibitively expensive. In this thesis we present principled methods that effectively garner human computing inputs for improving the extraction of knowledge-base facts from natural language texts. Our methods complement automatic extraction techniques with human computing to reap the benefits of both while overcoming each other's limitations. We present the architecture and implementation of HIGGINS, a system that combines an information extraction (IE) engine with a human computing (HC) engine to produce high-quality facts. The IE engine combines statistics derived from large Web corpora with semantic resources like WordNet and ConceptNet to construct a large dictionary of entity and relational phrases. It employs specifically designed statistical language models for phrase relatedness to come up with questions and relevant candidate answers that are presented to human workers. Through extensive experiments we establish the superiority of this approach in extracting relation-centric facts from text. In our experiments we extract facts about fictitious characters in narrative text, where the issues of diversity and complexity in expressing relations are far more pronounced. Finally, we also demonstrate how interesting human computing games can be designed for knowledge acquisition tasks.}, }
[127]
C. Kurz, “Constrained Camera Motion Estimation and 3D Reconstruction,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{KurzPhD2014, TITLE = {Constrained Camera Motion Estimation and {3D} Reconstruction}, AUTHOR = {Kurz, Christian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59439}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[128]
F. Makari Manshadi, “Scalable Optimization Algorithms for Recommender Systems,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{MakariManshadi2014, TITLE = {Scalable Optimization Algorithms for Recommender Systems}, AUTHOR = {Makari Manshadi, Faraz}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[129]
S. Metzger, “User-centric Knowledge Extraction and Maintenance,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Metzger2014, TITLE = {User-centric Knowledge Extraction and Maintenance}, AUTHOR = {Metzger, Steffen}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[130]
I. Reshetouski, “Kaleidoscopic Imaging,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{ReshetouskiPhD2014, TITLE = {Kaleidoscopic Imaging}, AUTHOR = {Reshetouski, Ilya}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-59308}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[131]
M. Rohrbach, “Combining Visual Recognition and Computational Linguistics : Linguistic Knowledge for Visual Recognition and Natural Language Descriptions of Visual Content,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Rohrbach14, TITLE = {Combining Visual Recognition and Computational Linguistics : Linguistic Knowledge for Visual Recognition and Natural Language Descriptions of Visual Content}, AUTHOR = {Rohrbach, Marcus}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-57580}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[132]
R. Röttger, “Active Transitivity Clustering of Large-scale Biomedical Datasets,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Roettger2014, TITLE = {Active Transitivity Clustering of Large-scale Biomedical Datasets}, AUTHOR = {R{\"o}ttger, Richard}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[133]
S. E. Schelhorn, “Going Viral : an Integrated View on Virological Data Analysis from Basic Research to Clinical Applications,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{Schelhorn2014, TITLE = {Going Viral : an Integrated View on Virological Data Analysis from Basic Research to Clinical Applications}, AUTHOR = {Schelhorn, Sven Eric}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[134]
C. Wu, “Inverse Rendering for Scene Reconstruction in General Environments,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@phdthesis{WuPhD2014, TITLE = {Inverse Rendering for Scene Reconstruction in General Environments}, AUTHOR = {Wu, Chenglei}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-58326}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
2013
[135]
A. Anand, “Indexing Methods for Web Archives,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
There have been numerous efforts recently to digitize previously published content and to preserve born-digital content, leading to the widespread growth of large text repositories. Web archives are such continuously growing text collections which contain versions of documents spanning over long time periods. Web archives present many opportunities for historical, cultural and political analyses. Consequently, there is a growing need for tools which can efficiently access and search them. In this work, we are interested in indexing methods for supporting text-search workloads over web archives like time-travel queries and phrase queries. To this end we make the following contributions: Time-travel queries are keyword queries with a temporal predicate, e.g., mpii saarland @ [06/2009], which return versions of documents in the past. We introduce a novel index organization strategy, called index sharding, for efficiently supporting time-travel queries without incurring additional index-size blowup. We also propose index-maintenance approaches which scale to such continuously growing collections. We develop query-optimization techniques for time-travel queries, called partition selection, which maximize recall at any given query-execution stage. We propose indexing methods to support phrase queries, e.g., to be or not to be that is the question. We index multi-word sequences and devise novel query-optimization methods over the indexed sequences to efficiently answer phrase queries. We demonstrate the superior performance of our approaches over existing methods by extensive experimentation on real-world web archives.
Export
BibTeX
@phdthesis{Anand2013, TITLE = {Indexing Methods for Web Archives}, AUTHOR = {Anand, Avishek}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {There have been numerous efforts recently to digitize previously published content and to preserve born-digital content, leading to the widespread growth of large text repositories. Web archives are such continuously growing text collections which contain versions of documents spanning over long time periods. Web archives present many opportunities for historical, cultural and political analyses. Consequently, there is a growing need for tools which can efficiently access and search them. In this work, we are interested in indexing methods for supporting text-search workloads over web archives like time-travel queries and phrase queries. To this end we make the following contributions: Time-travel queries are keyword queries with a temporal predicate, e.g., mpii saarland @ [06/2009], which return versions of documents in the past. We introduce a novel index organization strategy, called index sharding, for efficiently supporting time-travel queries without incurring additional index-size blowup. We also propose index-maintenance approaches which scale to such continuously growing collections. We develop query-optimization techniques for time-travel queries, called partition selection, which maximize recall at any given query-execution stage. We propose indexing methods to support phrase queries, e.g., to be or not to be that is the question. We index multi-word sequences and devise novel query-optimization methods over the indexed sequences to efficiently answer phrase queries. We demonstrate the superior performance of our approaches over existing methods by extensive experimentation on real-world web archives.}, }
[136]
O. Ciobotaru, “Rational Cryptography: Novel Constructions, Automated Verification and Unified Definitions,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Rational cryptography has recently emerged as a very promising field of research by combining notions and techniques from cryptography and game theory, because it offers an alternative to the rather inflexible traditional cryptographic model. In contrast to the classical view of cryptography where protocol participants are considered either honest or arbitrarily malicious, rational cryptography models participants as rational players that try to maximize their benefit and thus deviate from the protocol only if they gain an advantage by doing so. The main research goals for rational cryptography are the design of more efficient protocols when players adhere to a rational model, the design and implementation of automated proofs for rational security notions and the study of the intrinsic connections between game theoretic and cryptographic notions. In this thesis, we address all these issues. First we present the mathematical model and the design for a new rational file sharing protocol which we call RatFish. Next, we develop a general method for automated verification for rational cryptographic protocols and we show how to apply our technique in order to automatically derive the rational security property for RatFish. Finally, we study the intrinsic connections between game theory and cryptography by defining a new game theoretic notion, which we call game universal implementation, and by showing its equivalence with the notion of weak stand-alone security.
Export
BibTeX
@phdthesis{Ciobotaru2013, TITLE = {Rational Cryptography: Novel Constructions, Automated Verification and Unified Definitions}, AUTHOR = {Ciobotaru, Oana}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Rational cryptography has recently emerged as a very promising field of research by combining notions and techniques from cryptography and game theory, because it offers an alternative to the rather inflexible traditional cryptographic model. In contrast to the classical view of cryptography where protocol participants are considered either honest or arbitrarily malicious, rational cryptography models participants as rational players that try to maximize their benefit and thus deviate from the protocol only if they gain an advantage by doing so. The main research goals for rational cryptography are the design of more efficient protocols when players adhere to a rational model, the design and implementation of automated proofs for rational security notions and the study of the intrinsic connections between game theoretic and cryptographic notions. In this thesis, we address all these issues. First we present the mathematical model and the design for a new rational file sharing protocol which we call RatFish. Next, we develop a general method for automated verification for rational cryptographic protocols and we show how to apply our technique in order to automatically derive the rational security property for RatFish. Finally, we study the intrinsic connections between game theory and cryptography by defining a new game theoretic notion, which we call game universal implementation, and by showing its equivalence with the notion of weak stand-alone security.}, }
[137]
M. A. Granados Velásquez, “Advanced Editing Methods for Image and Video Sequences,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
In the context of image and video editing, this thesis proposes methods for modifying the semantic content of a recorded scene. Two different editing problems are approached: First, the removal of ghosting artifacts from high dynamic range (HDR) images recovered from exposure sequences, and second, the removal of objects from video sequences recorded with and without camera motion. These edits need to be performed in a way that the result looks plausible to humans, but without having to recover detailed models about the content of the scene, e.g. its geometry, reflectance, or illumination. The proposed editing methods add new key ingredients, such as camera noise models and global optimization frameworks, that help achieve results that surpass the capabilities of state-of-the-art methods. Using these ingredients, each proposed method defines local visual properties that approximate well the specific editing requirements of each task. These properties are then encoded into an energy function that, when globally minimized, produces the required editing results. The optimization of such energy functions corresponds to Bayesian inference problems that are solved efficiently using graph cuts. The proposed methods are demonstrated to outperform other state-of-the-art methods. Furthermore, they are demonstrated to work well on complex real-world scenarios that have not been previously addressed in the literature, i.e., highly cluttered scenes for HDR deghosting, and highly dynamic scenes and unconstrained camera motion for object removal from videos.
Export
BibTeX
@phdthesis{GranadosThesis2013, TITLE = {Advanced Editing Methods for Image and Video Sequences}, AUTHOR = {Granados Vel{\'a}squez, Miguel Andr{\'e}s}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55021}, LOCALID = {Local-ID: 2D353EDEDC2BDA47C1257BEA0053CCB8-GranadosThesis2013}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {In the context of image and video editing, this thesis proposes methods for modifying the semantic content of a recorded scene. Two different editing problems are approached: First, the removal of ghosting artifacts from high dynamic range (HDR) images recovered from exposure sequences, and second, the removal of objects from video sequences recorded with and without camera motion. These edits need to be performed in a way that the result looks plausible to humans, but without having to recover detailed models about the content of the scene, e.g. its geometry, reflectance, or illumination. The proposed editing methods add new key ingredients, such as camera noise models and global optimization frameworks, that help achieve results that surpass the capabilities of state-of-the-art methods. Using these ingredients, each proposed method defines local visual properties that approximate well the specific editing requirements of each task. These properties are then encoded into an energy function that, when globally minimized, produces the required editing results. The optimization of such energy functions corresponds to Bayesian inference problems that are solved efficiently using graph cuts. The proposed methods are demonstrated to outperform other state-of-the-art methods. Furthermore, they are demonstrated to work well on complex real-world scenarios that have not been previously addressed in the literature, i.e., highly cluttered scenes for HDR deghosting, and highly dynamic scenes and unconstrained camera motion for object removal from videos.}, }
[138]
T. Helten, “Processing and Tracking Human Motions Using Optical, Inertial, and Depth Sensors,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
The processing of human motion data constitutes an important strand of research with many applications in computer animation, sport science and medicine. Currently, there exist various systems for recording human motion data that employ sensors of different modalities such as optical, inertial and depth sensors. Each of these sensor modalities has intrinsic advantages and disadvantages that make it suitable for capturing specific aspects of human motions as, for example, the overall course of a motion, the shape of the human body, or the kinematic properties of motions. In this thesis, we contribute with algorithms that exploit the respective strengths of these different modalities for comparing, classifying, and tracking human motion in various scenarios. First, we show how our proposed techniques can be employed, e.g., for real-time motion reconstruction using efficient cross-modal retrieval techniques. Then, we discuss a practical application of inertial sensor-based features to the classification of trampoline motions. As a further contribution, we elaborate on estimating the human body shape from depth data with applications to personalized motion tracking. Finally, we introduce methods to stabilize a depth tracker in challenging situations such as in the presence of occlusions. Here, we exploit the availability of complementary inertial-based sensor information.
Export
BibTeX
@phdthesis{Helten2013_PhDThesis, TITLE = {Processing and Tracking Human Motions Using Optical, Inertial, and Depth Sensors}, AUTHOR = {Helten, Thomas}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-56126}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {The processing of human motion data constitutes an important strand of research with many applications in computer animation, sport science and medicine. Currently, there exist various systems for recording human motion data that employ sensors of different modalities such as optical, inertial and depth sensors. Each of these sensor modalities have intrinsic advantages and disadvantages that make them suitable for capturing specific aspects of human motions as, for example, the overall course of a motion, the shape of the human body, or the kinematic properties of motions. In this thesis, we contribute with algorithms that exploit the respective strengths of these different modalities for comparing, classifying, and tracking human motion in various scenarios. First, we show how our proposed techniques can be employed, e.g., for real-time motion reconstruction using efficient cross-modal retrieval techniques. Then, we discuss a practical application of inertial sensors-based features to the classification of trampoline motions. As a further contribution, we elaborate on estimating the human body shape from depth data with applications to personalized motion tracking. Finally, we introduce methods to stabilize a depth tracker in challenging situations such as in presence of occlusions. Here, we exploit the availability of complementary inertial-based sensor information.}, }
[139]
T. Jurkiewicz, “Toward Better Computation Models for Modern Machines,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Modern computers are not random access machines (RAMs). They have a memory hierarchy, multiple cores, and a virtual memory. We address the computational cost of the address translation in the virtual memory and difficulties in the design of parallel algorithms on modern many-core machines. The starting point for our work on virtual memory is the observation that the analysis of some simple algorithms (random scan of an array, binary search, heapsort) in either the RAM model or the EM model (external memory model) does not correctly predict growth rates of actual running times. We propose the VAT model (virtual address translation) to account for the cost of address translations and analyze the algorithms mentioned above and others in the model. The predictions agree with the measurements. We also analyze the VAT-cost of cache-oblivious algorithms. In the second part of the thesis we present a case study of the design of an efficient 2D convex hull algorithm for GPUs. The algorithm is based on "the ultimate planar convex hull algorithm" of Kirkpatrick and Seidel, and it has been referred to as "the first successful implementation of the QuickHull algorithm on the GPU" by Gao et al. in their 2012 paper on the 3D convex hull. Our motivation for work on modern many-core machines is the general belief of the engineering community that the theory does not produce applicable results, and that the theoretical researchers are not aware of the difficulties that arise while adapting algorithms for practical use. We concentrate on showing how the high degree of parallelism available on GPUs can be applied to problems that do not readily decompose into many independent tasks.
Export
BibTeX
@phdthesis{Jurkiewicz2013, TITLE = {Toward Better Computation Models for Modern Machines}, AUTHOR = {Jurkiewicz, Tomasz}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55407}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Modern computers are not random access machines (RAMs). They have a memory hierarchy, multiple cores, and a virtual memory. We address the computational cost of the address translation in the virtual memory and difficulties in design of parallel algorithms on modern many-core machines. Starting point for our work on virtual memory is the observation that the analysis of some simple algorithms (random scan of an array, binary search, heapsort) in either the RAM model or the EM model (external memory model) does not correctly predict growth rates of actual running times. We propose the VAT model (virtual address translation) to account for the cost of address translations and analyze the algorithms mentioned above and others in the model. The predictions agree with the measurements. We also analyze the VAT-cost of cache-oblivious algorithms. In the second part of the paper we present a case study of the design of an efficient 2D convex hull algorithm for GPUs. The algorithm is based on \emph{the ultimate planar convex hull algorithm} of Kirkpatrick and Seidel, and it has been referred to as \emph{the first successful implementation of the QuickHull algorithm on the GPU} by Gao et al. in their 2012 paper on the 3D convex hull. Our motivation for work on modern many-core machines is the general belief of the engineering community that the theory does not produce applicable results, and that the theoretical researchers are not aware of the difficulties that arise while adapting algorithms for practical use. We concentrate on showing how the high degree of parallelism available on GPUs can be applied to problems that do not readily decompose into many independent tasks.}, }
[140]
J. Kerber, “Of Assembling Small Sculptures and Disassembling Large Geometry,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
This thesis describes the research results and contributions that have been achieved during the author's doctoral work. It is divided into two independent parts, each of which is devoted to a particular research aspect. The first part covers the true-to-detail creation of digital pieces of art, so-called relief sculptures, from given 3D models. The main goal is to limit the depth of the contained objects with respect to a certain perspective without compromising the initial three-dimensional impression. Here, the preservation of significant features and especially their sharpness is crucial. Therefore, it is necessary to overemphasize fine surface details to ensure their perceptibility in the more complanate relief. Our developments are aimed at amending the flexibility and user-friendliness during the generation process. The main focus is on providing real-time solutions with intuitive usability that make it possible to create precise, lifelike and aesthetic results. These goals are reached by a GPU implementation, the use of efficient filtering techniques, and the replacement of user-defined parameters by adaptive values. Our methods are capable of processing dynamic scenes and allow the generation of seamless artistic reliefs which can be composed of multiple elements. The second part addresses the analysis of repetitive structures, so-called symmetries, within very large data sets. The automatic recognition of components and their patterns is a complex correspondence problem which has numerous applications ranging from information visualization over compression to automatic scene understanding. Recent algorithms reach their limits with a growing amount of data, since their runtimes rise quadratically. Our aim is to make even massive data sets manageable. Therefore, it is necessary to abstract features and to develop a suitable, low-dimensional descriptor which ensures an efficient, robust, and purposive search. 
A simple inspection of the proximity within the descriptor space helps to significantly reduce the number of necessary pairwise comparisons. Our method scales quasi-linearly and allows a rapid analysis of data sets which could not be handled by prior approaches because of their size.
Export
BibTeX
@phdthesis{Kerber2013_2, TITLE = {Of Assembling Small Sculptures and Disassembling Large Geometry}, AUTHOR = {Kerber, Jens}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55160}, LOCALID = {Local-ID: 0B9352B7950A1459C1257BF60042B83E-Kerber2013_2}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {This thesis describes the research results and contributions that have been achieved during the author's doctoral work. It is divided into two independent parts, each of which is devoted to a particular research aspect. The first part covers the true-to-detail creation of digital pieces of art, so-called relief sculptures, from given 3D models. The main goal is to limit the depth of the contained objects with respect to a certain perspective without compromising the initial three-dimensional impression. Here, the preservation of significant features and especially their sharpness is crucial. Therefore, it is necessary to overemphasize fine surface details to ensure their perceptibility in the more complanate relief. Our developments are aimed at amending the flexibility and user-friendliness during the generation process. The main focus is on providing real-time solutions with intuitive usability that make it possible to create precise, lifelike and aesthetic results. These goals are reached by a GPU implementation, the use of efficient filtering techniques, and the replacement of user defined parameters by adaptive values. Our methods are capable of processing dynamic scenes and allow the generation of seamless artistic reliefs which can be composed of multiple elements. The second part addresses the analysis of repetitive structures, so-called symmetries, within very large data sets. The automatic recognition of components and their patterns is a complex correspondence problem which has numerous applications ranging from information visualization over compression to automatic scene understanding. 
Recent algorithms reach their limits with a growing amount of data, since their runtimes rise quadratically. Our aim is to make even massive data sets manageable. Therefore, it is necessary to abstract features and to develop a suitable, low-dimensional descriptor which ensures an efficient, robust, and purposive search. A simple inspection of the proximity within the descriptor space helps to significantly reduce the number of necessary pairwise comparisons. Our method scales quasi-linearly and allows a rapid analysis of data sets which could not be handled by prior approaches because of their size.}, }
[141]
E. Kruglov, “Superposition Modulo Theory,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@phdthesis{KruglovDiss13, TITLE = {Superposition Modulo Theory}, AUTHOR = {Kruglov, Evgeny}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55597}, LOCALID = {Local-ID: F58B326B7199622DC1257C66003BEFFF-KruglovDiss13}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
[142]
T. Lu, “Formal Verification of the Pastry Protocol,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@phdthesis{LuDiss13, TITLE = {Formal Verification of the {Pastry} Protocol}, AUTHOR = {Lu, Tianxiang}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55878}, LOCALID = {Local-ID: 53D311D21A10BD89C1257C66003CDFCF-LuDiss13}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
[143]
K. R. Patil, “Genome Signature based Sequence Comparison for Taxonomic Assignment and Tree Inference,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@phdthesis{Patil2013, TITLE = {Genome Signature based Sequence Comparison for Taxonomic Assignment and Tree Inference}, AUTHOR = {Patil, Kaustubh Raosaheb}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-52973}, LOCALID = {Local-ID: 58D1B1989200E496C1257BFF002517BF-Patil2013}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013-01}, }
[144]
L. Qu, “Sentiment Analysis with Limited Training Data,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Sentiments are positive and negative emotions, evaluations and stances. This dissertation focuses on learning-based systems for automatic analysis of sentiments and comparisons in natural language text. The proposed approach consists of three contributions: 1. Bag-of-opinions model: For predicting document-level polarity and intensity, we proposed the bag-of-opinions model by modeling each document as a bag of sentiments, which can explore the syntactic structures of sentiment-bearing phrases for improved rating prediction of online reviews. 2. Multi-experts model: Due to the sparsity of manually-labeled training data, we designed the multi-experts model for sentence-level analysis of sentiment polarity and intensity by fully exploiting any available sentiment indicators, such as phrase-level predictors and sentence similarity measures. 3. LSSVMrae model: To understand the sentiments regarding entities, we proposed the LSSVMrae model for extracting sentiments and comparisons of entities at both sentence and subsentential level. Different granularity of analysis leads to different model complexity: the finer, the more complex. All proposed models aim to minimize the use of hand-labeled data by maximizing the use of the freely available resources. These models also explore different feature representations to capture the compositional semantics inherent in sentiment-bearing expressions. Our experimental results on real-world data showed that all models significantly outperform the state-of-the-art methods on the respective tasks.
Export
BibTeX
@phdthesis{Qu2013, TITLE = {Sentiment Analysis with Limited Training Data}, AUTHOR = {Qu, Lizhen}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Sentiments are positive and negative emotions, evaluations and stances. This dissertation focuses on learning based systems for automatic analysis of sentiments and comparisons in natural language text. The proposed approach consists of three contributions: 1. Bag-of-opinions model: For predicting document-level polarity and intensity, we proposed the bag-of-opinions model by modeling each document as a bag of sentiments, which can explore the syntactic structures of sentiment-bearing phrases for improved rating prediction of online reviews. 2. Multi-experts model: Due to the sparsity of manually-labeled training data, we designed the multi-experts model for sentence-level analysis of sentiment polarity and intensity by fully exploiting any available sentiment indicators, such as phrase-level predictors and sentence similarity measures. 3. LSSVMrae model: To understand the sentiments regarding entities, we proposed LSSVMrae model for extracting sentiments and comparisons of entities at both sentence and subsentential level. Different granularity of analysis leads to different model complexity, the finer the more complex. All proposed models aim to minimize the use of hand-labeled data by maximizing the use of the freely available resources. These models explore also different feature representations to capture the compositional semantics inherent in sentiment-bearing expressions. Our experimental results on real-world data showed that all models significantly outperform the state-of-the-art methods on the respective tasks.}, }
[145]
K. Scherbaum, “Data Driven Analysis of Faces from Images,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@phdthesis{Scherbaum2013z, TITLE = {Data Driven Analysis of Faces from Images}, AUTHOR = {Scherbaum, Kristina}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55212}, LOCALID = {Local-ID: 263F0D6B29F5A1A8C1257C600050EA30-Scherbaum2013}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
[146]
M. Shaheen, “Cache Based Optimization of Stencil Computations: An Algorithmic Approach,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@phdthesis{PhDThesis2013:Shaheen_Mohammed, TITLE = {Cache Based Optimization of Stencil Computations: An Algorithmic Approach}, AUTHOR = {Shaheen, Mohammed}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55494}, LOCALID = {Local-ID: 112EF87E6A67B9BEC1257C2E003399CB-PhDThesis2013:Shaheen_Mohammed}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
[147]
A. Stupar, “Soundtrack Recommendation for Images,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
The drastic increase in production of multimedia content has emphasized the research concerning its organization and retrieval. In this thesis, we address the problem of music retrieval when a set of images is given as input query, i.e., the problem of soundtrack recommendation for images. The task at hand is to recommend appropriate music to be played during the presentation of a given set of query images. To tackle this problem, we formulate a hypothesis that the knowledge appropriate for the task is contained in publicly available contemporary movies. Our approach, Picasso, employs similarity search techniques inside the image and music domains, harvesting movies to form a link between the domains. To achieve a fair and unbiased comparison between different soundtrack recommendation approaches, we proposed an evaluation benchmark. The evaluation results are reported for Picasso and the baseline approach, using the proposed benchmark. We further address two efficiency aspects that arise from the Picasso approach. First, we investigate the problem of processing top-K queries with set-defined selections and propose an index structure that aims at minimizing the query answering latency. Second, we address the problem of similarity search in high-dimensional spaces and propose two enhancements to the Locality Sensitive Hashing (LSH) scheme. We also investigate the prospects of a distributed similarity search algorithm based on LSH using the MapReduce framework. Finally, we give an overview of PicasSound, a smartphone application based on the Picasso approach.
Export
BibTeX
@phdthesis{Stupar2012, TITLE = {Soundtrack Recommendation for Images}, AUTHOR = {Stupar, Aleksandar}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {The drastic increase in production of multimedia content has emphasized the research concerning its organization and retrieval. In this thesis, we address the problem of music retrieval when a set of images is given as input query, i.e., the problem of soundtrack recommendation for images. The task at hand is to recommend appropriate music to be played during the presentation of a given set of query images. To tackle this problem, we formulate a hypothesis that the knowledge appropriate for the task is contained in publicly available contemporary movies. Our approach, Picasso, employs similarity search techniques inside the image and music domains, harvesting movies to form a link between the domains. To achieve a fair and unbiased comparison between different soundtrack recommendation approaches, we propose an evaluation benchmark. The evaluation results are reported for Picasso and the baseline approach, using the proposed benchmark. We further address two efficiency aspects that arise from the Picasso approach. First, we investigate the problem of processing top-K queries with set-defined selections and propose an index structure that aims at minimizing the query answering latency. Second, we address the problem of similarity search in high-dimensional spaces and propose two enhancements to the Locality Sensitive Hashing (LSH) scheme. We also investigate the prospects of a distributed similarity search algorithm based on LSH using the MapReduce framework. Finally, we give an overview of PicasSound, a smartphone application based on the Picasso approach.}, }
Endnote
%0 Thesis %A Stupar, Aleksandar %Y Michel, Sebastian %A referee: Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T Soundtrack Recommendation for Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-9794-D %I Universität des Saarlandes %C Saarbrücken %D 2013 %P 149 p. %V phd %9 phd %X The drastic increase in production of multimedia content has emphasized the research concerning its organization and retrieval. In this thesis, we address the problem of music retrieval when a set of images is given as input query, i.e., the problem of soundtrack recommendation for images. The task at hand is to recommend appropriate music to be played during the presentation of a given set of query images. To tackle this problem, we formulate a hypothesis that the knowledge appropriate for the task is contained in publicly available contemporary movies. Our approach, Picasso, employs similarity search techniques inside the image and music domains, harvesting movies to form a link between the domains. To achieve a fair and unbiased comparison between different soundtrack recommendation approaches, we propose an evaluation benchmark. The evaluation results are reported for Picasso and the baseline approach, using the proposed benchmark. We further address two efficiency aspects that arise from the Picasso approach. First, we investigate the problem of processing top-K queries with set-defined selections and propose an index structure that aims at minimizing the query answering latency. Second, we address the problem of similarity search in high-dimensional spaces and propose two enhancements to the Locality Sensitive Hashing (LSH) scheme. We also investigate the prospects of a distributed similarity search algorithm based on LSH using the MapReduce framework. Finally, we give an overview of PicasSound, a smartphone application based on the Picasso approach. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5526/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[148]
M. Sunkel, “Statistical Part-based Models for Object Detection in Large 3D Scans,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
3D scanning technology has matured to a point where very large scale acquisition of high resolution geometry has become feasible. However, having large quantities of 3D data poses new technical challenges. Many applications of practical use require an understanding of semantics of the acquired geometry. Consequently scene understanding plays a key role for many applications. This thesis is concerned with two core topics: 3D object detection and semantic alignment. We address the problem of efficiently detecting large quantities of objects in 3D scans according to object categories learned from sparse user annotation. Objects are modeled by a collection of smaller sub-parts and a graph structure representing part dependencies. The thesis introduces two novel approaches: A part-based chain structured Markov model and a general part-based full correlation model. Both models come with efficient detection schemes which allow for interactive run-times.
Export
BibTeX
@phdthesis{SunkelThesis2013, TITLE = {Statistical Part-based Models for Object Detection in Large {3D} Scans}, AUTHOR = {Sunkel, Martin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-55128}, LOCALID = {Local-ID: D229974BF6B66B74C1257BF2004DF924-SunkelThesis2013}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013-09}, ABSTRACT = {3D scanning technology has matured to a point where very large scale acquisition of high resolution geometry has become feasible. However, having large quantities of 3D data poses new technical challenges. Many applications of practical use require an understanding of semantics of the acquired geometry. Consequently scene understanding plays a key role for many applications. This thesis is concerned with two core topics: 3D object detection and semantic alignment. We address the problem of efficiently detecting large quantities of objects in 3D scans according to object categories learned from sparse user annotation. Objects are modeled by a collection of smaller sub-parts and a graph structure representing part dependencies. The thesis introduces two novel approaches: A part-based chain structured Markov model and a general part-based full correlation model. Both models come with efficient detection schemes which allow for interactive run-times.}, }
Endnote
%0 Thesis %A Sunkel, Martin %Y Seidel, Hans-Peter %A referee: Wand, Michael %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Statistical Part-based Models for Object Detection in Large 3D Scans : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3D3F-D %U urn:nbn:de:bsz:291-scidok-55128 %F OTHER: Local-ID: D229974BF6B66B74C1257BF2004DF924-SunkelThesis2013 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X 3D scanning technology has matured to a point where very large scale acquisition of high resolution geometry has become feasible. However, having large quantities of 3D data poses new technical challenges. Many applications of practical use require an understanding of semantics of the acquired geometry. Consequently scene understanding plays a key role for many applications. This thesis is concerned with two core topics: 3D object detection and semantic alignment. We address the problem of efficiently detecting large quantities of objects in 3D scans according to object categories learned from sparse user annotation. Objects are modeled by a collection of smaller sub-parts and a graph structure representing part dependencies. The thesis introduces two novel approaches: A part-based chain structured Markov model and a general part-based full correlation model. Both models come with efficient detection schemes which allow for interactive run-times. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5512/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[149]
B. Taneva, “Automatic Population of Knowledge Bases with Multimodal Data about Named Entities,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Knowledge bases are of great importance for Web search, recommendations, and many Information Retrieval tasks. However, maintaining them for not so popular entities is often a bottleneck. Typically, such entities have limited textual coverage and only a few ontological facts. Moreover, these entities are not well populated with multimodal data, such as images, videos, or audio recordings. The goals in this thesis are (1) to populate a given knowledge base with multimodal data about entities, such as images or audio recordings, and (2) to ease the task of maintaining and expanding the textual knowledge about a given entity, by recommending valuable text excerpts to the contributors of knowledge bases. The thesis makes three main contributions. The first two contributions concentrate on finding images of named entities with high precision, high recall, and high visual diversity. Our main focus are less popular entities, for which the image search engines fail to retrieve good results. Our methods utilize background knowledge about the entity, such as ontological facts or a short description, and a visual-based image similarity to rank and diversify a set of candidate images. Our third contribution is an approach for extracting text contents related to a given entity. It leverages a language-model-based similarity between a short description of the entity and the text sources, and solves a budget-constrained optimization program without any assumptions on the text structure. Moreover, our approach is also able to reliably extract entity-related audio excerpts from news podcasts. We derive the time boundaries from the usually very noisy audio transcriptions.
Export
BibTeX
@phdthesis{TanevaPhDThesis, TITLE = {Automatic Population of Knowledge Bases with Multimodal Data about Named Entities}, AUTHOR = {Taneva, Bilyana}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-54839}, LOCALID = {Local-ID: 28FC9CE2EBDB4763C1257BD40056934A-TanevaPhDThesis}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Knowledge bases are of great importance for Web search, recommendations, and many Information Retrieval tasks. However, maintaining them for not so popular entities is often a bottleneck. Typically, such entities have limited textual coverage and only a few ontological facts. Moreover, these entities are not well populated with multimodal data, such as images, videos, or audio recordings. The goals in this thesis are (1) to populate a given knowledge base with multimodal data about entities, such as images or audio recordings, and (2) to ease the task of maintaining and expanding the textual knowledge about a given entity, by recommending valuable text excerpts to the contributors of knowledge bases. The thesis makes three main contributions. The first two contributions concentrate on finding images of named entities with high precision, high recall, and high visual diversity. Our main focus are less popular entities, for which the image search engines fail to retrieve good results. Our methods utilize background knowledge about the entity, such as ontological facts or a short description, and a visual-based image similarity to rank and diversify a set of candidate images. Our third contribution is an approach for extracting text contents related to a given entity. It leverages a language-model-based similarity between a short description of the entity and the text sources, and solves a budget-constraint optimization program without any assumptions on the text structure. Moreover, our approach is also able to reliably extract entity related audio excerpts from news podcasts. 
We derive the time boundaries from the usually very noisy audio transcriptions.}, }
Endnote
%0 Thesis %A Taneva, Bilyana %Y Weikum, Gerhard %A referee: Suchanek, Fabian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Automatic Population of Knowledge Bases with Multimodal Data about Named Entities : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-389C-E %U urn:nbn:de:bsz:291-scidok-54839 %F OTHER: Local-ID: 28FC9CE2EBDB4763C1257BD40056934A-TanevaPhDThesis %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X Knowledge bases are of great importance for Web search, recommendations, and many Information Retrieval tasks. However, maintaining them for not so popular entities is often a bottleneck. Typically, such entities have limited textual coverage and only a few ontological facts. Moreover, these entities are not well populated with multimodal data, such as images, videos, or audio recordings. The goals in this thesis are (1) to populate a given knowledge base with multimodal data about entities, such as images or audio recordings, and (2) to ease the task of maintaining and expanding the textual knowledge about a given entity, by recommending valuable text excerpts to the contributors of knowledge bases. The thesis makes three main contributions. The first two contributions concentrate on finding images of named entities with high precision, high recall, and high visual diversity. Our main focus are less popular entities, for which the image search engines fail to retrieve good results. Our methods utilize background knowledge about the entity, such as ontological facts or a short description, and a visual-based image similarity to rank and diversify a set of candidate images. Our third contribution is an approach for extracting text contents related to a given entity. It leverages a language-model-based similarity between a short description of the entity and the text sources, and solves a budget-constrained optimization program without any assumptions on the text structure. Moreover, our approach is also able to reliably extract entity-related audio excerpts from news podcasts. We derive the time boundaries from the usually very noisy audio transcriptions. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5483/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[150]
Y. Wang, “Methods and Tools for Temporal Knowledge Harvesting,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
To extend the traditional knowledge base with a temporal dimension, this thesis offers methods and tools for harvesting temporal facts from both semi-structured and textual sources. Our contributions are briefly summarized as follows. (1) Timely YAGO: A temporal knowledge base called Timely YAGO (T-YAGO), which extends YAGO with temporal attributes, is built. We define a simple RDF-style data model to support temporal knowledge. (2) PRAVDA: To harvest as many temporal facts from free text as possible, we develop the system PRAVDA. It utilizes a graph-based semi-supervised learning algorithm to extract fact observations, which are further cleaned up by an Integer Linear Program based constraint solver. We also attempt to harvest spatio-temporal facts to track a person's trajectory. (3) PRAVDA-live: A user-centric interactive knowledge harvesting system, called PRAVDA-live, is developed for extracting facts from natural-language free text. It is built on the framework of PRAVDA. It supports fact extraction of user-defined relations from ad-hoc selected text documents and ready-to-use RDF exports. (4) T-URDF: We present a simple and efficient representation model for time-dependent uncertainty in combination with first-order inference rules and recursive queries over RDF-like knowledge bases. We adopt the common possible-worlds semantics known from probabilistic databases and extend it towards histogram-like confidence distributions that capture the validity of facts across time. All of these components are fully implemented systems, which together form an integrative architecture. PRAVDA and PRAVDA-live aim at gathering new facts (particularly temporal facts), and T-URDF reconciles them. Finally, these facts are stored in a (temporal) knowledge base called T-YAGO. A SPARQL-like time-aware querying language, together with a visualization tool, is designed for T-YAGO. Temporal knowledge can also be applied to document summarization.
Export
BibTeX
@phdthesis{Wang-thesis2013, TITLE = {Methods and Tools for Temporal Knowledge Harvesting}, AUTHOR = {Wang, Yafang}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-50967}, LOCALID = {Local-ID: 142737B17504ED10C1257B19006B30E4-Wang-thesis2013}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {To extend the traditional knowledge base with temporal dimension, this thesis offers methods and tools for harvesting temporal facts from both semi-structured and textual sources. Our contributions are briefly summarized as follows. \begin{enumerate} \item{\bf Timely YAGO:} A temporal knowledge base called Timely YAGO (T-YAGO) which extends YAGO with temporal attributes is built. We define a simple RDF-style data model to support temporal knowledge. \item{\bf PRAVDA:} To be able to harvest as many temporal facts from free-text as possible, we develop a system PRAVDA. It utilizes a graph-based semi-supervised learning algorithm to extract fact observations, which are further cleaned up by an Integer Linear Program based constraint solver. We also attempt to harvest spatio-temporal facts to track a person's trajectory. \item{\bf PRAVDA-live:} A user-centric interactive knowledge harvesting system, called PRAVDA-live, is developed for extracting facts from natural language free-text. It is built on the framework of PRAVDA. It supports fact extraction of user-defined relations from ad-hoc selected text documents and ready-to-use RDF exports. \item{\bf T-URDF:} We present a simple and efficient representation model for time-dependent uncertainty in combination with first-order inference rules and recursive queries over RDF-like knowledge bases. We adopt the common possible-worlds semantics known from probabilistic databases and extend it towards histogram-like confidence distributions that capture the validity of facts across time. \end{enumerate} All of these components are fully implemented systems, which together form an integrative architecture. PRAVDA and PRAVDA-live aim at gathering new facts (particularly temporal facts), and then T-URDF reconciles them. Finally these facts are stored in a (temporal) knowledge base, called T-YAGO. A SPARQL-like time-aware querying language, together with a visualization tool, are designed for T-YAGO. Temporal knowledge can also be applied for document summarization.}, }
Endnote
%0 Thesis %A Wang, Yafang %Y Weikum, Gerhard %A referee: Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Methods and Tools for Temporal Knowledge Harvesting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0015-3892-2 %F OTHER: Local-ID: 142737B17504ED10C1257B19006B30E4-Wang-thesis2013 %U urn:nbn:de:bsz:291-scidok-50967 %I Universität des Saarlandes %C Saarbrücken %D 2013 %V phd %9 phd %X To extend the traditional knowledge base with temporal dimension, this thesis offers methods and tools for harvesting temporal facts from both semi-structured and textual sources. Our contributions are briefly summarized as follows. (1) Timely YAGO: A temporal knowledge base called Timely YAGO (T-YAGO) which extends YAGO with temporal attributes is built. We define a simple RDF-style data model to support temporal knowledge. (2) PRAVDA: To be able to harvest as many temporal facts from free-text as possible, we develop a system PRAVDA. It utilizes a graph-based semi-supervised learning algorithm to extract fact observations, which are further cleaned up by an Integer Linear Program based constraint solver. We also attempt to harvest spatio-temporal facts to track a person's trajectory. (3) PRAVDA-live: A user-centric interactive knowledge harvesting system, called PRAVDA-live, is developed for extracting facts from natural language free-text. It is built on the framework of PRAVDA. It supports fact extraction of user-defined relations from ad-hoc selected text documents and ready-to-use RDF exports. (4) T-URDF: We present a simple and efficient representation model for time-dependent uncertainty in combination with first-order inference rules and recursive queries over RDF-like knowledge bases. We adopt the common possible-worlds semantics known from probabilistic databases and extend it towards histogram-like confidence distributions that capture the validity of facts across time. All of these components are fully implemented systems, which together form an integrative architecture. PRAVDA and PRAVDA-live aim at gathering new facts (particularly temporal facts), and then T-URDF reconciles them. Finally these facts are stored in a (temporal) knowledge base, called T-YAGO. A SPARQL-like time-aware querying language, together with a visualization tool, are designed for T-YAGO. Temporal knowledge can also be applied for document summarization. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5096/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
2012
[151]
R. Awadallah, “Methods for Constructing an Opinion Network for Politically Controversial Topics,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
The US presidential race, the re-election of President Hugo Chavez, and the economic crisis in Greece and other European countries are some of the controversial topics being played on the news every day. To understand the landscape of opinions on political controversies, it would be helpful to know which politician or other stakeholder takes which position (support or opposition) on specific aspects of these topics. The work described in this thesis aims to automatically derive a map of the opinions-people network from news and other Web documents. The focus is on acquiring opinions held by various stakeholders on politically controversial topics. This opinions-people network serves as a knowledge base of opinions in the form of ⟨opinion holder⟩ ⟨opinion⟩ ⟨topic⟩ triples. Our system to build this knowledge base makes use of online news sources in order to extract opinions from text snippets. These sources come with a set of unique challenges. For example, processing text snippets involves not just identifying the topic and the opinion, but also attributing that opinion to a specific opinion holder. This requires making use of deep parsing and analyzing the parse tree. Moreover, in order to ensure uniformity, both the topic as well as the opinion holder should be mapped to canonical strings, and the topics should also be organized into a hierarchy. Our system relies on two main components: i) acquiring opinions, which uses a combination of techniques to extract opinions from online news sources, and ii) organizing topics, which crawls and extracts debates from online sources and organizes these debates in a hierarchy of politically controversial topics. We present systematic evaluations of the different components of our system and show their high accuracies. We also present applications that require political analysis, such as identifying flip-floppers, political bias, and dissenters. Such applications can make use of the knowledge base of opinions.
Export
BibTeX
@phdthesis{AwadallahPhd2012, TITLE = {Methods for Constructing an Opinion Network for Politically Controversial Topics}, AUTHOR = {Awadallah, Rawia}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {The US presidential race, the re-election of President Hugo Chavez, and the economic crisis in Greece and other European countries are some of the controversial topics being played on the news every day. To understand the landscape of opinions on political controversies, it would be helpful to know which politician or other stakeholder takes which position -- support or opposition -- on specific aspects of these topics. The work described in this thesis aims to automatically derive a map of the opinions-people network from news and other Web documents. The focus is on acquiring opinions held by various stakeholders on politically controversial topics. This opinions-people network serves as a knowledge base of opinions in the form of $\langle$opinion holder$\rangle$ $\langle$opinion$\rangle$ $\langle$topic$\rangle$ triples. Our system to build this knowledge base makes use of online news sources in order to extract opinions from text snippets. These sources come with a set of unique challenges. For example, processing text snippets involves not just identifying the topic and the opinion, but also attributing that opinion to a specific opinion holder. This requires making use of deep parsing and analyzing the parse tree. Moreover, in order to ensure uniformity, both the topic as well as the opinion holder should be mapped to canonical strings, and the topics should also be organized into a hierarchy. Our system relies on two main components: i) acquiring opinions, which uses a combination of techniques to extract opinions from online news sources, and ii) organizing topics, which crawls and extracts debates from online sources and organizes these debates in a hierarchy of politically controversial topics. We present systematic evaluations of the different components of our system and show their high accuracies. We also present applications that require political analysis, such as identifying flip-floppers, political bias, and dissenters. Such applications can make use of the knowledge base of opinions.}, }
Endnote
%0 Thesis %A Awadallah, Rawia %Y Weikum, Gerhard %A referee: Rauber, Andreas %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Methods for Constructing an Opinion Network for Politically Controversial Topics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-CC92-8 %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %X The US presidential race, the re-election of President Hugo Chavez, and the economic crisis in Greece and other European countries are some of the controversial topics being played on the news every day. To understand the landscape of opinions on political controversies, it would be helpful to know which politician or other stakeholder takes which position (support or opposition) on specific aspects of these topics. The work described in this thesis aims to automatically derive a map of the opinions-people network from news and other Web documents. The focus is on acquiring opinions held by various stakeholders on politically controversial topics. This opinions-people network serves as a knowledge base of opinions in the form of ⟨opinion holder⟩ ⟨opinion⟩ ⟨topic⟩ triples. Our system to build this knowledge base makes use of online news sources in order to extract opinions from text snippets. These sources come with a set of unique challenges. For example, processing text snippets involves not just identifying the topic and the opinion, but also attributing that opinion to a specific opinion holder. This requires making use of deep parsing and analyzing the parse tree. Moreover, in order to ensure uniformity, both the topic as well as the opinion holder should be mapped to canonical strings, and the topics should also be organized into a hierarchy. Our system relies on two main components: i) acquiring opinions, which uses a combination of techniques to extract opinions from online news sources, and ii) organizing topics, which crawls and extracts debates from online sources and organizes these debates in a hierarchy of politically controversial topics. We present systematic evaluations of the different components of our system and show their high accuracies. We also present applications that require political analysis, such as identifying flip-floppers, political bias, and dissenters. Such applications can make use of the knowledge base of opinions. %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5037/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[152]
A. Baak, “Retrieval-based Approaches for Tracking and Reconstructing Human Motions,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{PhDThesisBaak, TITLE = {Retrieval-based Approaches for Tracking and Reconstructing Human Motions}, AUTHOR = {Baak, Andreas}, LANGUAGE = {eng}, LOCALID = {Local-ID: BEB52808520FB526C1257AEE003A0264-PhDThesisBaak}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012-11}, }
Endnote
%0 Thesis %A Baak, Andreas %Y Rosenhahn, Bodo %A referee: Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Retrieval-based Approaches for Tracking and Reconstructing Human Motions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-F4E1-1 %F OTHER: Local-ID: BEB52808520FB526C1257AEE003A0264-PhDThesisBaak %I Universität des Saarlandes %C Saarbrücken %D 2012 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2013/5029/http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[153]
A. Broschart, “Efficient Query Processing and Index Tuning Using Proximity Scores,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
In the presence of growing data, the need for efficient query processing under result quality and index size control becomes more and more a challenge to search engines. We show how to use proximity scores to make query processing effective and efficient with focus on either of the optimization goals. More precisely, we make the following contributions: • We present a comprehensive comparative analysis of proximity score models and a rigorous analysis of the potential of phrases, and adapt a leading proximity score model for XML data. • We discuss the feasibility of all presented proximity score models for top-k query processing and present a novel index combining a content and proximity score that helps to accelerate top-k query processing and improves result quality. • We present a novel, distributed index tuning framework for term and term pair index lists that optimizes pruning parameters by means of well-defined optimization criteria under disk space constraints. Indexes can be tuned with emphasis on efficiency or effectiveness: the resulting indexes yield fast processing at high result quality. • We show that pruned index lists processed with a merge join outperform top-k query processing with unpruned lists at a high result quality. • Moreover, we present a hybrid index structure for improved cold cache run times.
Export
BibTeX
@phdthesis{Broschart_PhD2012, TITLE = {Efficient Query Processing and Index Tuning Using Proximity Scores}, AUTHOR = {Broschart, Andreas}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-DE4B2520B99264A3C1257B1900434A8C-Broschart_PhD2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {In the presence of growing data, the need for efficient query processing under result quality and index size control becomes more and more of a challenge to search engines. We show how to use proximity scores to make query processing effective and efficient with a focus on either of the optimization goals. More precisely, we make the following contributions: \mbox{$\bullet$} We present a comprehensive comparative analysis of proximity score models and a rigorous analysis of the potential of phrases and adapt a leading proximity score model for XML data. \mbox{$\bullet$} We discuss the feasibility of all presented proximity score models for top-k query processing and present a novel index combining a content and proximity score that helps to accelerate top-k query processing and improves result quality. \mbox{$\bullet$} We present a novel, distributed index tuning framework for term and term pair index lists that optimizes pruning parameters by means of well-defined optimization criteria under disk space constraints. Indexes can be tuned with emphasis on efficiency or effectiveness: the resulting indexes yield fast processing at high result quality. \mbox{$\bullet$} We show that pruned index lists processed with a merge join outperform top-k query processing with unpruned lists at a high result quality. \mbox{$\bullet$} Moreover, we present a hybrid index structure for improved cold cache run times.}, }
[154]
T. Crecelius, “Socially Enhanced Search and Exploration in Social Tagging Networks,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Social tagging networks have become highly popular for publishing and searching contents. Users in such networks can review, rate and comment on contents, or annotate them with keywords (social tags) to give short but exact text representations of even non-textual contents. In addition, there is an inherent support for interactions and relationships among users. Thus, users naturally form groups of friends or of common interests. We address three research areas in our work utilising these intrinsic features of social tagging networks. (1) We investigate new approaches for exploiting the social knowledge of and the relationships between users for searching and recommending relevant contents, and integrate them in a comprehensive framework, coined SENSE, for search in social tagging networks. (2) To dynamically update precomputed lists of transitive friends in descending order of their distance in user graphs of social tagging networks, we provide an algorithm for incrementally solving the all pairs shortest distance problem in large, disk-resident graphs and formally prove its correctness. (3) Since users are content providers in social tagging networks, users may keep their own data at independent, local peers that collaborate in a distributed P2P network. We provide an algorithm for such systems to counter cheating of peers in authority computations over social networks. The viability of each solution is demonstrated by extensive experiments regarding effectiveness and efficiency.
Export
BibTeX
@phdthesis{Crecelius2012, TITLE = {Socially Enhanced Search and Exploration in Social Tagging Networks}, AUTHOR = {Crecelius, Tom}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-48548}, LOCALID = {Local-ID: C1256DBF005F876D-09A3BA69BFF35ED9C12579FA002F601D-Crecelius2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Social tagging networks have become highly popular for publishing and searching contents. Users in such networks can review, rate and comment on contents, or annotate them with keywords (social tags) to give short but exact text representations of even non-textual contents. In addition, there is an inherent support for interactions and relationships among users. Thus, users naturally form groups of friends or of common interests. We address three research areas in our work utilising these intrinsic features of social tagging networks. (1) We investigate new approaches for exploiting the social knowledge of and the relationships between users for searching and recommending relevant contents, and integrate them in a comprehensive framework, coined SENSE, for search in social tagging networks. (2) To dynamically update precomputed lists of transitive friends in descending order of their distance in user graphs of social tagging networks, we provide an algorithm for incrementally solving the all pairs shortest distance problem in large, disk-resident graphs and formally prove its correctness. (3) Since users are content providers in social tagging networks, users may keep their own data at independent, local peers that collaborate in a distributed P2P network. We provide an algorithm for such systems to counter cheating of peers in authority computations over social networks. The viability of each solution is demonstrated by extensive experiments regarding effectiveness and efficiency.}, }
[155]
D. Denev, “Methods and Models for Web Archive Crawling,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Web archives offer a rich and plentiful source of information to researchers, analysts, and legal experts. For this purpose, they gather Web sites as the sites change over time. In order to maintain high standards of data quality, Web archives have to collect all versions of the Web sites. Due to limited resources and technical constraints this is not possible. Therefore, Web archives consist of versions archived at various time points without guarantee for mutual consistency. This thesis presents a model for assessing the data quality in Web archives as well as a family of crawling strategies yielding high-quality captures. We distinguish between single-visit crawling strategies for exploratory and visit-revisit crawling strategies for evidentiary purposes. Single-visit strategies download every page exactly once, aiming for an "undistorted" capture of the ever-changing Web. We express the quality of the resulting capture with the "blur" quality measure. In contrast, visit-revisit strategies download every page twice. The initial downloads of all pages form the visit phase of the crawling strategy. The second downloads are grouped together in the revisit phase. These two phases enable us to check which pages changed during the crawling process. Thus, we can identify the pages that are consistent with each other. The quality of the visit-revisit captures is expressed by the "coherence" measure. Quality-conscious strategies are based on predictions of the change behaviour of individual pages. We model the Web site dynamics by Poisson processes with page-specific change rates. Furthermore, we show that these rates can be statistically predicted. Finally, we propose visualization techniques for exploring the quality of the resulting Web archives. A fully functional prototype demonstrates the practical viability of our approach.
Export
BibTeX
@phdthesis{DenevPhD2012, TITLE = {Methods and Models for Web Archive Crawling}, AUTHOR = {Denev, Dimitar}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-92B687F6B976DAC4C1257A65004F67A6-DenevPhD2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Web archives offer a rich and plentiful source of information to researchers, analysts, and legal experts. For this purpose, they gather Web sites as the sites change over time. In order to maintain high standards of data quality, Web archives have to collect all versions of the Web sites. Due to limited resources and technical constraints this is not possible. Therefore, Web archives consist of versions archived at various time points without guarantee for mutual consistency. This thesis presents a model for assessing the data quality in Web archives as well as a family of crawling strategies yielding high-quality captures. We distinguish between single-visit crawling strategies for exploratory and visit-revisit crawling strategies for evidentiary purposes. Single-visit strategies download every page exactly once, aiming for an ``undistorted'' capture of the ever-changing Web. We express the quality of the resulting capture with the ``blur'' quality measure. In contrast, visit-revisit strategies download every page twice. The initial downloads of all pages form the visit phase of the crawling strategy. The second downloads are grouped together in the revisit phase. These two phases enable us to check which pages changed during the crawling process. Thus, we can identify the pages that are consistent with each other. The quality of the visit-revisit captures is expressed by the ``coherence'' measure. Quality-conscious strategies are based on predictions of the change behaviour of individual pages. We model the Web site dynamics by Poisson processes with page-specific change rates. Furthermore, we show that these rates can be statistically predicted. Finally, we propose visualization techniques for exploring the quality of the resulting Web archives. A fully functional prototype demonstrates the practical viability of our approach.}, }
[156]
P. Didyk, “Perceptual Display: Exceeding Display Limitations by Exploiting the Human Visual System,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{Didyk2012, TITLE = {Perceptual Display: Exceeding Display Limitations by Exploiting the Human Visual System}, AUTHOR = {Didyk, Piotr}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-49311}, LOCALID = {Local-ID: 92393E91F27D5B62C1257A710042EDA1-Didyk2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
[157]
S. Ebert, “Semi-supervised Learning for Image Classification,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Object class recognition is an active topic in computer vision still presenting many challenges. In most approaches, this task is addressed by supervised learning algorithms that need a large quantity of labels to perform well. This leads either to small datasets (< 10,000 images) that capture only a subset of the real-world class distribution (but with a controlled and verified labeling procedure), or to large datasets that are more representative but also add more label noise. Therefore, semi-supervised learning is a promising direction. It requires only a few labels while simultaneously making use of the vast amount of images available today. We address object class recognition with semi-supervised learning. These algorithms depend on the underlying structure given by the data, the image description, and the similarity measure, as well as on the quality of the labels. This insight leads to the main research questions of this thesis: Is the structure given by labeled and unlabeled data more important than the algorithm itself? Can we improve this neighborhood structure by a better similarity metric or with more representative unlabeled data? Is there a connection between the quality of labels and the overall performance and how can we get more representative labels? We answer all these questions, i.e., we provide an extensive evaluation, we propose several graph improvements, and we introduce a novel active learning framework to get more representative labels.
Export
BibTeX
@phdthesis{EbertDiss2012, TITLE = {Semi-supervised Learning for Image Classification}, AUTHOR = {Ebert, Sandra}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-52659}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Object class recognition is an active topic in computer vision still presenting many challenges. In most approaches, this task is addressed by supervised learning algorithms that need a large quantity of labels to perform well. This leads either to small datasets (< 10,000 images) that capture only a subset of the real-world class distribution (but with a controlled and verified labeling procedure), or to large datasets that are more representative but also add more label noise. Therefore, semi-supervised learning is a promising direction. It requires only a few labels while simultaneously making use of the vast amount of images available today. We address object class recognition with semi-supervised learning. These algorithms depend on the underlying structure given by the data, the image description, and the similarity measure, as well as on the quality of the labels. This insight leads to the main research questions of this thesis: Is the structure given by labeled and unlabeled data more important than the algorithm itself? Can we improve this neighborhood structure by a better similarity metric or with more representative unlabeled data? Is there a connection between the quality of labels and the overall performance and how can we get more representative labels? We answer all these questions, i.e., we provide an extensive evaluation, we propose several graph improvements, and we introduce a novel active learning framework to get more representative labels.}, }
[158]
S. Elbassuoni, “Effective Searching of RDF Knowledge Bases,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{Elbassuoni2011, TITLE = {Effective Searching of {RDF} Knowledge Bases}, AUTHOR = {Elbassuoni, Shady}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-5AC1FB349CA835F1C12579AB002FFB29-Elbassuoni2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
[159]
P. Emeliyanenko, “Harnessing the Power of GPUs for Problems in Real Algebraic Geometry,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{PhDEmeliyanenko12, TITLE = {Harnessing the Power of {GPUs} for Problems in Real Algebraic Geometry}, AUTHOR = {Emeliyanenko, Pavel}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-49953}, LOCALID = {Local-ID: 67210896377E6C6CC1257AFB006221B2-PhDEmeliyanenko12}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
[160]
M. Fouz, “Randomized Rumor Spreading in Social Networks & Complete Graphs,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{FouzDiss2012, TITLE = {Randomized Rumor Spreading in Social Networks \& Complete Graphs}, AUTHOR = {Fouz, Mahmoud}, LANGUAGE = {eng}, LOCALID = {Local-ID: 21D54E873E79BCA6C1257B0C00400A64-FouzDiss2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
[161]
P. M. Grosche, “Signal Processing Methods for Beat Tracking, Music Segmentation, and Audio Retrieval,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
The goal of music information retrieval (MIR) is to develop novel strategies and techniques for organizing, exploring, accessing, and understanding music data in an efficient manner. The conversion of waveform-based audio data into semantically meaningful feature representations by the use of digital signal processing techniques is at the center of MIR and constitutes a difficult field of research because of the complexity and diversity of music signals. In this thesis, we introduce novel signal processing methods that allow for extracting musically meaningful information from audio signals. As our main strategy, we exploit musical knowledge about the signals' properties to derive feature representations that show a significant degree of robustness against musical variations but still exhibit a high musical expressiveness. We apply this general strategy to three different areas of MIR: Firstly, we introduce novel techniques for extracting tempo and beat information, where we particularly consider challenging music with changing tempo and soft note onsets. Secondly, we present novel algorithms for the automated segmentation and analysis of folk song field recordings, where one has to cope with significant fluctuations in intonation and tempo as well as recording artifacts. Thirdly, we explore a cross-version approach to content-based music retrieval based on the query-by-example paradigm. In all three areas, we focus on application scenarios where strong musical variations make the extraction of musically meaningful information a challenging task.
Export
BibTeX
@phdthesis{Grosche2012, TITLE = {Signal Processing Methods for Beat Tracking, Music Segmentation, and Audio Retrieval}, AUTHOR = {Grosche, Peter Matthias}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-50576}, LOCALID = {Local-ID: 0C70626E41A89315C1257AE1004F5255-Grosche2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {The goal of music information retrieval (MIR) is to develop novel strategies and techniques for organizing, exploring, accessing, and understanding music data in an efficient manner. The conversion of waveform-based audio data into semantically meaningful feature representations by the use of digital signal processing techniques is at the center of MIR and constitutes a difficult field of research because of the complexity and diversity of music signals. In this thesis, we introduce novel signal processing methods that allow for extracting musically meaningful information from audio signals. As main strategy, we exploit musical knowledge about the signals' properties to derive feature representations that show a significant degree of robustness against musical variations but still exhibit a high musical expressiveness. We apply this general strategy to three different areas of MIR: Firstly, we introduce novel techniques for extracting tempo and beat information, where we particularly consider challenging music with changing tempo and soft note onsets. Secondly, we present novel algorithms for the automated segmentation and analysis of folk song field recordings, where one has to cope with significant fluctuations in intonation and tempo as well as recording artifacts. Thirdly, we explore a cross-version approach to content-based music retrieval based on the query-by-example paradigm. In all three areas, we focus on application scenarios where strong musical variations make the extraction of musically meaningful information a challenging task.}, }
[162]
D. Günther, “Topological Analysis of Discrete Scalar Data,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
This thesis presents a novel computational framework that allows for a robust extraction and quantification of the Morse-Smale complex of a scalar field given on a 2- or 3-dimensional manifold. The proposed framework is based on Forman's discrete Morse theory, which guarantees the topological consistency of the computed complex. Using a graph theoretical formulation of this theory, we present an algorithmic library that computes the Morse-Smale complex combinatorially with an optimal complexity of O(n^2) and efficiently creates a multi-level representation of it. We explore the discrete nature of this complex, and relate it to its smooth counterpart. It is often necessary to estimate the feature strength of the individual components of the Morse-Smale complex -- the critical points and separatrices. To do so, we propose a novel output-sensitive strategy to compute the persistence of the critical points. We also extend this well-founded concept to separatrices by introducing a novel measure of feature strength called separatrix persistence. We evaluate the applicability of our methods in a wide variety of application areas ranging from computer graphics to planetary science to computer and electron tomography.
Export
BibTeX
@phdthesis{guenther12phd, TITLE = {Topological Analysis of Discrete Scalar Data}, AUTHOR = {G{\"u}nther, David}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-50563}, LOCALID = {Local-ID: 810A1DC7D88F9AD6C1257AFD003684DC-guenther12phd}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {This thesis presents a novel computational framework that allows for a robust extraction and quantification of the Morse-Smale complex of a scalar field given on a 2- or 3-dimensional manifold. The proposed framework is based on Forman's discrete Morse theory, which guarantees the topological consistency of the computed complex. Using a graph theoretical formulation of this theory, we present an algorithmic library that computes the Morse-Smale complex combinatorially with an optimal complexity of O(n^2) and efficiently creates a multi-level representation of it. We explore the discrete nature of this complex, and relate it to its smooth counterpart. It is often necessary to estimate the feature strength of the individual components of the Morse-Smale complex -- the critical points and separatrices. To do so, we propose a novel output-sensitive strategy to compute the persistence of the critical points. We also extend this well-founded concept to separatrices by introducing a novel measure of feature strength called separatrix persistence. We evaluate the applicability of our methods in a wide variety of application areas ranging from computer graphics to planetary science to computer and electron tomography.}, }
[163]
C. Hritcu, “Union, Intersection, and Refinement Types and Reasoning About Type Disjointness for Security Protocol Analysis,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
In this thesis we present two new type systems for verifying the security of cryptographic protocol models expressed in a spi-calculus and, respectively, of protocol implementations expressed in a concurrent lambda calculus. The two type systems combine prior work on refinement types with union and intersection types and with the novel ability to reason statically about the disjointness of types. The increased expressivity enables the analysis of important protocol classes that were previously out of scope for the type-based analyses of cryptographic protocols. In particular, our type systems can statically analyze protocols that are based on zero-knowledge proofs, even in scenarios when certain protocol participants are compromised. The analysis is scalable and provides security proofs for an unbounded number of protocol executions. The two type systems come with mechanized proofs of correctness and efficient implementations.
Export
BibTeX
@phdthesis{Hritcu2012, TITLE = {Union, Intersection, and Refinement Types and Reasoning About Type Disjointness for Security Protocol Analysis}, AUTHOR = {Hritcu, Catalin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {In this thesis we present two new type systems for verifying the security of cryptographic protocol models expressed in a spi-calculus and, respectively, of protocol implementations expressed in a concurrent lambda calculus. The two type systems combine prior work on refinement types with union and intersection types and with the novel ability to reason statically about the disjointness of types. The increased expressivity enables the analysis of important protocol classes that were previously out of scope for the type-based analyses of cryptographic protocols. In particular, our type systems can statically analyze protocols that are based on zero-knowledge proofs, even in scenarios when certain protocol participants are compromised. The analysis is scalable and provides security proofs for an unbounded number of protocol executions. The two type systems come with mechanized proofs of correctness and efficient implementations.}, }
[164]
V. Konz, “Automated Methods for Audio-based Music Analysis with Applications to Musicology,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@phdthesis{PhDThesisKonzVerena, TITLE = {Automated Methods for Audio-based Music Analysis with Applications to Musicology}, AUTHOR = {Konz, Verena}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-49984}, LOCALID = {Local-ID: 017FB6407E6271CBC1257AEE00350D94-PhDThesisKonzVerena}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
[165]
Y. Mileva, “Mining the Evolution of Software Component Usage,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
The topic of this thesis is the analysis of the evolution of software components. In order to track the evolution of software components, one needs to collect the evolution information of each component. This information is stored in the version control system (VCS) of the project, the repository of the history of events happening throughout the project's lifetime. By using software archive mining techniques one can extract and leverage this information. The main contribution of this thesis is the introduction of evolution usage trends and evolution change patterns. The raw information about the occurrences of each component is stored in the VCS of the project. By organizing it in evolution trends and patterns, we are able to draw conclusions and issue recommendations concerning each individual component and the project as a whole. Evolution trends: An evolution trend is a way to track the evolution of a software component throughout the span of the project. The trend shows the increases and decreases in the usage of a specific component, which can be indicative of the quality of this component. AKTARI is a tool, presented in this thesis, that is based on such evolution trends and can be used by software developers to observe and draw conclusions about the behavior of their project. Evolution patterns: An evolution pattern is a pattern of a frequently occurring code change throughout the span of the project. Those frequently occurring changes are project-specific and are explanatory of the way the project evolves. Each such evolution pattern captures the specific way "things are done" in the project and as such can serve for defect detection and defect prevention. The technique of mining evolution patterns is implemented as a basis for the LAMARCK tool, presented in this thesis.
Export
BibTeX
@phdthesis{Mileva2012, TITLE = {Mining the Evolution of Software Component Usage}, AUTHOR = {Mileva, Yana}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {The topic of this thesis is the analysis of the evolution of software components. In order to track the evolution of software components, one needs to collect the evolution information of each component. This information is stored in the version control system (VCS) of the project, the repository of the history of events happening throughout the project's lifetime. By using software archive mining techniques one can extract and leverage this information. The main contribution of this thesis is the introduction of evolution usage trends and evolution change patterns. The raw information about the occurrences of each component is stored in the VCS of the project. By organizing it in evolution trends and patterns, we are able to draw conclusions and issue recommendations concerning each individual component and the project as a whole. Evolution trends: An evolution trend is a way to track the evolution of a software component throughout the span of the project. The trend shows the increases and decreases in the usage of a specific component, which can be indicative of the quality of this component. AKTARI is a tool, presented in this thesis, that is based on such evolution trends and can be used by software developers to observe and draw conclusions about the behavior of their project. Evolution patterns: An evolution pattern is a pattern of a frequently occurring code change throughout the span of the project. Those frequently occurring changes are project-specific and are explanatory of the way the project evolves. Each such evolution pattern captures the specific way ``things are done'' in the project and as such can serve for defect detection and defect prevention. The technique of mining evolution patterns is implemented as a basis for the LAMARCK tool, presented in this thesis.}, }
[166]
N. Nakashole, “Automatic Extraction of Facts, Relations, and Entities for Web-scale Knowledge Base Population,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Equipping machines with knowledge, through the construction of machine-readable knowledge bases, presents a key asset for semantic search, machine translation, question answering, and other formidable challenges in artificial intelligence. However, human knowledge predominantly resides in books and other natural language text forms