Theses 2021

Master
[1]
D. M. H. Nguyen, “Lifted Multi-Cut Optimization for Multi-Camera Multi-People Tracking,” Universität des Saarlandes, Saarbrücken, 2021.
BibTeX
@mastersthesis{NguyenMaster21,
  TITLE = {Lifted Multi-Cut Optimization for Multi-Camera Multi-People Tracking},
  AUTHOR = {Nguyen, Duy Minh Ho},
  LANGUAGE = {eng},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2021},
  MARGINALMARK = {$\bullet$},
  DATE = {2021},
}
Endnote
%0 Thesis
%A Nguyen, Duy Minh Ho
%Y Swoboda, Paul
%A referee: Schiele, Bernt
%+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
%T Lifted Multi-Cut Optimization for Multi-Camera Multi-People Tracking :
%G eng
%U http://hdl.handle.net/21.11116/0000-000B-542C-6
%I Universität des Saarlandes
%C Saarbrücken
%D 2021
%P 69 p.
%V master
%9 master
PhD
[2]
A. Bhattacharyya, “Long-term future prediction under uncertainty and multi-modality,” Universität des Saarlandes, Saarbrücken, 2021.
BibTeX
@phdthesis{Batphd2021,
  TITLE = {Long-term future prediction under uncertainty and multi-modality},
  AUTHOR = {Bhattacharyya, Apratim},
  LANGUAGE = {eng},
  URL = {nbn:de:bsz:291--ds-356522},
  DOI = {10.22028/D291-35652},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2021},
  MARGINALMARK = {$\bullet$},
  DATE = {2021},
}
Endnote
%0 Thesis
%A Bhattacharyya, Apratim
%Y Schiele, Bernt
%A referee: Fritz, Mario
%A referee: Geiger, Andreas
%+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society External Organizations
%T Long-term future prediction under uncertainty and multi-modality :
%G eng
%U http://hdl.handle.net/21.11116/0000-000A-20BF-B
%R 10.22028/D291-35652
%U nbn:de:bsz:291--ds-356522
%F OTHER: hdl:20.500.11880/32595
%I Universität des Saarlandes
%C Saarbrücken
%D 2021
%P 210 p.
%V phd
%9 phd
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/32595
[3]
B. R. Chaudhury, “Finding Fair and Efficient Allocations,” Universität des Saarlandes, Saarbrücken, 2021.
BibTeX
@phdthesis{Chaudphd2021,
  TITLE = {Finding Fair and Efficient Allocations},
  AUTHOR = {Chaudhury, Bhaskar Ray},
  LANGUAGE = {eng},
  URL = {nbn:de:bsz:291--ds-345370},
  DOI = {10.22028/D291-34537},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2021},
  MARGINALMARK = {$\bullet$},
  DATE = {2021},
}
Endnote
%0 Thesis
%A Chaudhury, Bhaskar Ray
%Y Mehlhorn, Kurt
%A referee: Bringmann, Karl
%A referee: Roughgarden, Tim
%A referee: Moulin, Herve
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations
%T Finding Fair and Efficient Allocations :
%G eng
%U http://hdl.handle.net/21.11116/0000-0009-9CC9-5
%R 10.22028/D291-34537
%U nbn:de:bsz:291--ds-345370
%F OTHER: hdl:20.500.11880/31737
%I Universität des Saarlandes
%C Saarbrücken
%D 2021
%P 173 p.
%V phd
%9 phd
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31737
[4]
D. A. Durai, “Novel graph based algorithms for transcriptome sequence analysis,” Universität des Saarlandes, Saarbrücken, 2021.
BibTeX
@phdthesis{Duraiphd2020,
  TITLE = {Novel graph based algorithms for transcriptome sequence analysis},
  AUTHOR = {Durai, Dilip Ariyur},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291--ds-341585},
  DOI = {10.22028/D291-34158},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2021},
  MARGINALMARK = {$\bullet$},
  DATE = {2021},
}
Endnote
%0 Thesis
%A Durai, Dilip Ariyur
%Y Schulz, Marcel
%A referee: Helms, Volker
%+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations
%T Novel graph based algorithms for transcriptome sequence analysis :
%G eng
%U http://hdl.handle.net/21.11116/0000-0008-E4D6-5
%R 10.22028/D291-34158
%U urn:nbn:de:bsz:291--ds-341585
%F OTHER: hdl:20.500.11880/31478
%I Universität des Saarlandes
%C Saarbrücken
%D 2021
%P 143 p.
%V phd
%9 phd
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31478
[5]
M. H. Gad-Elrab, “Explainable Methods for Knowledge Graph Refinement and Exploration via Symbolic Reasoning,” Universität des Saarlandes, Saarbrücken, 2021.
Abstract
Knowledge Graphs (KGs) have applications in many domains such as Finance, Manufacturing, and Healthcare. While recent efforts have created large KGs, their content is far from complete and sometimes includes invalid statements. Therefore, it is crucial to refine the constructed KGs to enhance their coverage and accuracy via KG completion and KG validation. It is also vital to provide human-comprehensible explanations for such refinements, so that humans have trust in the KG quality. Enabling KG exploration, by search and browsing, is also essential for users to understand the KG value and limitations towards down-stream applications. However, the large size of KGs makes KG exploration very challenging. While the type taxonomy of KGs is a useful asset along these lines, it remains insufficient for deep exploration. In this dissertation we tackle the aforementioned challenges of KG refinement and KG exploration by combining logical reasoning over the KG with other techniques such as KG embedding models and text mining. Through such combination, we introduce methods that provide human-understandable output. Concretely, we introduce methods to tackle KG incompleteness by learning exception-aware rules over the existing KG. Learned rules are then used in inferring missing links in the KG accurately. Furthermore, we propose a framework for constructing human-comprehensible explanations for candidate facts from both KG and text. Extracted explanations are used to ensure the validity of KG facts. Finally, to facilitate KG exploration, we introduce a method that combines KG embeddings with rule mining to compute informative entity clusters with explanations.
BibTeX
@phdthesis{Elrabphd2021,
  TITLE = {Explainable Methods for Knowledge Graph Refinement and Exploration via Symbolic Reasoning},
  AUTHOR = {Gad-Elrab, Mohamed Hassan},
  LANGUAGE = {eng},
  URL = {urn:nbn:de:bsz:291--ds-344237},
  DOI = {10.22028/D291-34423},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2021},
  MARGINALMARK = {$\bullet$},
  DATE = {2021},
  ABSTRACT = {Knowledge Graphs (KGs) have applications in many domains such as Finance, Manufacturing, and Healthcare. While recent efforts have created large KGs, their content is far from complete and sometimes includes invalid statements. Therefore, it is crucial to refine the constructed KGs to enhance their coverage and accuracy via KG completion and KG validation. It is also vital to provide human-comprehensible explanations for such refinements, so that humans have trust in the KG quality. Enabling KG exploration, by search and browsing, is also essential for users to understand the KG value and limitations towards down-stream applications. However, the large size of KGs makes KG exploration very challenging. While the type taxonomy of KGs is a useful asset along these lines, it remains insufficient for deep exploration. In this dissertation we tackle the aforementioned challenges of KG refinement and KG exploration by combining logical reasoning over the KG with other techniques such as KG embedding models and text mining. Through such combination, we introduce methods that provide human-understandable output. Concretely, we introduce methods to tackle KG incompleteness by learning exception-aware rules over the existing KG. Learned rules are then used in inferring missing links in the KG accurately. Furthermore, we propose a framework for constructing human-comprehensible explanations for candidate facts from both KG and text. Extracted explanations are used to ensure the validity of KG facts. Finally, to facilitate KG exploration, we introduce a method that combines KG embeddings with rule mining to compute informative entity clusters with explanations.},
}
Endnote
%0 Thesis
%A Gad-Elrab, Mohamed Hassan
%Y Weikum, Gerhard
%A referee: Theobald, Martin
%A referee: Stepanova, Daria
%A referee: Razniewski, Simon
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Explainable Methods for Knowledge Graph Refinement and Exploration via Symbolic Reasoning :
%G eng
%U http://hdl.handle.net/21.11116/0000-0009-427E-0
%R 10.22028/D291-34423
%U urn:nbn:de:bsz:291--ds-344237
%F OTHER: hdl:20.500.11880/31629
%I Universität des Saarlandes
%C Saarbrücken
%D 2021
%P 176 p.
%V phd
%9 phd
%X Knowledge Graphs (KGs) have applications in many domains such as Finance, Manufacturing, and Healthcare. While recent efforts have created large KGs, their content is far from complete and sometimes includes invalid statements. Therefore, it is crucial to refine the constructed KGs to enhance their coverage and accuracy via KG completion and KG validation. It is also vital to provide human-comprehensible explanations for such refinements, so that humans have trust in the KG quality. Enabling KG exploration, by search and browsing, is also essential for users to understand the KG value and limitations towards down-stream applications. However, the large size of KGs makes KG exploration very challenging. While the type taxonomy of KGs is a useful asset along these lines, it remains insufficient for deep exploration. In this dissertation we tackle the aforementioned challenges of KG refinement and KG exploration by combining logical reasoning over the KG with other techniques such as KG embedding models and text mining. Through such combination, we introduce methods that provide human-understandable output. Concretely, we introduce methods to tackle KG incompleteness by learning exception-aware rules over the existing KG. Learned rules are then used in inferring missing links in the KG accurately. Furthermore, we propose a framework for constructing human-comprehensible explanations for candidate facts from both KG and text. Extracted explanations are used to ensure the validity of KG facts. Finally, to facilitate KG exploration, we introduce a method that combines KG embeddings with rule mining to compute informative entity clusters with explanations.
%K knowledge graphs, symbolic learning, embedding models, rule learning, Big Data
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31629
[6]
A. Ghazimatin, “Enhancing Explainability and Scrutability of Recommender Systems,” Universität des Saarlandes, Saarbrücken, 2021.
Abstract
Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm’s behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from the information consumers to receive proper explanations for their personalized recommendations. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Besides, in the event of receiving undesirable content, explanations could possibly contain valuable information as to how the system’s behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems:

• We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users’ profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal.

• We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible for users, because they present subsets of the user’s prior actions responsible for the received recommendations. PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations.

• We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability and subsequently the recommendation models by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations.

We evaluate all proposed models and methods with real user studies and demonstrate their benefits at achieving explainability and scrutability in recommender systems.
BibTeX
@phdthesis{Ghazphd2021,
  TITLE = {Enhancing Explainability and Scrutability of Recommender Systems},
  AUTHOR = {Ghazimatin, Azin},
  LANGUAGE = {eng},
  URL = {nbn:de:bsz:291--ds-355166},
  DOI = {10.22028/D291-35516},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2021},
  MARGINALMARK = {$\bullet$},
  DATE = {2021},
  ABSTRACT = {Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm{\textquoteright}s behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from the information consumers to receive proper explanations for their personalized recommendations. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Besides, in the event of receiving undesirable content, explanations could possibly contain valuable information as to how the system{\textquoteright}s behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems: \mbox{$\bullet$} We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users{\textquoteright} profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal. \mbox{$\bullet$} We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible for users, because they present subsets of the user{\textquoteright}s prior actions responsible for the received recommendations. PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations. \mbox{$\bullet$} We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability and subsequently the recommendation models by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations. We evaluate all proposed models and methods with real user studies and demonstrate their benefits at achieving explainability and scrutability in recommender systems.},
}
Endnote
%0 Thesis
%A Ghazimatin, Azin
%Y Weikum, Gerhard
%A referee: Saha Roy, Rishiraj
%A referee: Amer-Yahia, Sihem
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations
%T Enhancing Explainability and Scrutability of Recommender Systems :
%G eng
%U http://hdl.handle.net/21.11116/0000-000A-3C99-7
%R 10.22028/D291-35516
%U nbn:de:bsz:291--ds-355166
%F OTHER: hdl:20.500.11880/32590
%I Universität des Saarlandes
%C Saarbrücken
%D 2021
%P 136 p.
%V phd
%9 phd
%X Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm’s behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from the information consumers to receive proper explanations for their personalized recommendations. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Besides, in the event of receiving undesirable content, explanations could possibly contain valuable information as to how the system’s behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems: • We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users’ profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal. • We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible for users, because they present subsets of the user’s prior actions responsible for the received recommendations. PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations. • We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability and subsequently the recommendation models by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations. We evaluate all proposed models and methods with real user studies and demonstrate their benefits at achieving explainability and scrutability in recommender systems.
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/32590
[7]
M. Habermann, “Real-time human performance capture and synthesis,” Universität des Saarlandes, Saarbrücken, 2021.
Abstract
Most of the images one finds in the media, such as on the Internet or in textbooks and magazines, contain humans as the main point of attention. Thus, there is an inherent necessity for industry, society, and private persons to be able to thoroughly analyze and synthesize the human-related content in these images. One aspect of this analysis and subject of this thesis is to infer the 3D pose and surface deformation, using only visual information, which is also known as human performance capture. Human performance capture enables the tracking of virtual characters from real-world observations, and this is key for visual effects, games, VR, and AR, to name just a few application areas. However, traditional capture methods usually rely on expensive multi-view (marker-based) systems that are prohibitively expensive for the vast majority of people, or they use depth sensors, which are still not as common as single color cameras. Recently, some approaches have attempted to solve the task by assuming only a single RGB image is given. Nonetheless, they can either not track the dense deforming geometry of the human, such as the clothing layers, or they are far from real time, which is indispensable for many applications. To overcome these shortcomings, this thesis proposes two monocular human performance capture methods, which for the first time allow the real-time capture of the dense deforming geometry as well as an unseen 3D accuracy for pose and surface deformations. At the technical core, this work introduces novel GPU-based and data-parallel optimization strategies in conjunction with other algorithmic design choices that are all geared towards real-time performance at high accuracy. Moreover, this thesis presents a new weakly supervised multiview training strategy combined with a fully differentiable character representation that shows superior 3D accuracy. However, there is more to human-related Computer Vision than only the analysis of people in images. 
It is equally important to synthesize new images of humans in unseen poses and also from camera viewpoints that have not been observed in the real world. Such tools are essential for the movie industry because they, for example, allow the synthesis of photo-realistic virtual worlds with real-looking humans or of contents that are too dangerous for actors to perform on set. But also video conferencing and telepresence applications can benefit from photo-real 3D characters, as they can enhance the immersive experience of these applications. Here, the traditional Computer Graphics pipeline for rendering photo-realistic images involves many tedious and time-consuming steps that require expert knowledge and are far from real time. Traditional rendering involves character rigging and skinning, the modeling of the surface appearance properties, and physically based ray tracing. Recent learning-based methods attempt to simplify the traditional rendering pipeline and instead learn the rendering function from data resulting in methods that are more easily accessible to non-experts. However, most of them model the synthesis task entirely in image space such that 3D consistency cannot be achieved, and/or they fail to model motion- and view-dependent appearance effects. To this end, this thesis presents a method and ongoing work on character synthesis, which allow the synthesis of controllable photoreal characters that achieve motion- and view-dependent appearance effects as well as 3D consistency and which run in real time. This is technically achieved by a novel coarse-to-fine geometric character representation for efficient synthesis, which can be solely supervised on multi-view imagery. Furthermore, this work shows how such a geometric representation can be combined with an implicit surface representation to boost synthesis and geometric quality.
BibTeX
@phdthesis{Habermannphd2021,
  TITLE = {Real-time human performance capture and synthesis},
  AUTHOR = {Habermann, Marc},
  LANGUAGE = {eng},
  URL = {nbn:de:bsz:291--ds-349617},
  DOI = {10.22028/D291-34961},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2021},
  MARGINALMARK = {$\bullet$},
  DATE = {2021},
  ABSTRACT = {Most of the images one finds in the media, such as on the Internet or in textbooks and magazines, contain humans as the main point of attention. Thus, there is an inherent necessity for industry, society, and private persons to be able to thoroughly analyze and synthesize the human-related content in these images. One aspect of this analysis and subject of this thesis is to infer the 3D pose and surface deformation, using only visual information, which is also known as human performance capture. Human performance capture enables the tracking of virtual characters from real-world observations, and this is key for visual effects, games, VR, and AR, to name just a few application areas. However, traditional capture methods usually rely on expensive multi-view (marker-based) systems that are prohibitively expensive for the vast majority of people, or they use depth sensors, which are still not as common as single color cameras. Recently, some approaches have attempted to solve the task by assuming only a single RGB image is given. Nonetheless, they can either not track the dense deforming geometry of the human, such as the clothing layers, or they are far from real time, which is indispensable for many applications. To overcome these shortcomings, this thesis proposes two monocular human performance capture methods, which for the first time allow the real-time capture of the dense deforming geometry as well as an unseen 3D accuracy for pose and surface deformations. At the technical core, this work introduces novel GPU-based and data-parallel optimization strategies in conjunction with other algorithmic design choices that are all geared towards real-time performance at high accuracy. Moreover, this thesis presents a new weakly supervised multiview training strategy combined with a fully differentiable character representation that shows superior 3D accuracy. However, there is more to human-related Computer Vision than only the analysis of people in images. It is equally important to synthesize new images of humans in unseen poses and also from camera viewpoints that have not been observed in the real world. Such tools are essential for the movie industry because they, for example, allow the synthesis of photo-realistic virtual worlds with real-looking humans or of contents that are too dangerous for actors to perform on set. But also video conferencing and telepresence applications can benefit from photo-real 3D characters, as they can enhance the immersive experience of these applications. Here, the traditional Computer Graphics pipeline for rendering photo-realistic images involves many tedious and time-consuming steps that require expert knowledge and are far from real time. Traditional rendering involves character rigging and skinning, the modeling of the surface appearance properties, and physically based ray tracing. Recent learning-based methods attempt to simplify the traditional rendering pipeline and instead learn the rendering function from data resulting in methods that are more easily accessible to non-experts. However, most of them model the synthesis task entirely in image space such that 3D consistency cannot be achieved, and/or they fail to model motion- and view-dependent appearance effects. To this end, this thesis presents a method and ongoing work on character synthesis, which allow the synthesis of controllable photoreal characters that achieve motion- and view-dependent appearance effects as well as 3D consistency and which run in real time. This is technically achieved by a novel coarse-to-fine geometric character representation for efficient synthesis, which can be solely supervised on multi-view imagery. Furthermore, this work shows how such a geometric representation can be combined with an implicit surface representation to boost synthesis and geometric quality.},
}
Endnote
%0 Thesis
%A Habermann, Marc
%Y Theobalt, Christian
%A referee: Seidel, Hans-Peter
%A referee: Hilton, Adrian
%+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations
%T Real-time human performance capture and synthesis :
%G eng
%U http://hdl.handle.net/21.11116/0000-0009-7D87-3
%R 10.22028/D291-34961
%U nbn:de:bsz:291--ds-349617
%F OTHER: hdl:20.500.11880/31986
%I Universität des Saarlandes
%C Saarbrücken
%D 2021
%P 153 p.
%V phd
%9 phd
%X Most of the images one finds in the media, such as on the Internet or in textbooks and magazines, contain humans as the main point of attention. Thus, there is an inherent necessity for industry, society, and private persons to be able to thoroughly analyze and synthesize the human-related content in these images. One aspect of this analysis and subject of this thesis is to infer the 3D pose and surface deformation, using only visual information, which is also known as human performance capture. Human performance capture enables the tracking of virtual characters from real-world observations, and this is key for visual effects, games, VR, and AR, to name just a few application areas. However, traditional capture methods usually rely on expensive multi-view (marker-based) systems that are prohibitively expensive for the vast majority of people, or they use depth sensors, which are still not as common as single color cameras. Recently, some approaches have attempted to solve the task by assuming only a single RGB image is given. Nonetheless, they can either not track the dense deforming geometry of the human, such as the clothing layers, or they are far from real time, which is indispensable for many applications. To overcome these shortcomings, this thesis proposes two monocular human performance capture methods, which for the first time allow the real-time capture of the dense deforming geometry as well as an unseen 3D accuracy for pose and surface deformations. At the technical core, this work introduces novel GPU-based and data-parallel optimization strategies in conjunction with other algorithmic design choices that are all geared towards real-time performance at high accuracy. Moreover, this thesis presents a new weakly supervised multiview training strategy combined with a fully differentiable character representation that shows superior 3D accuracy. However, there is more to human-related Computer Vision than only the analysis of people in images. It is equally important to synthesize new images of humans in unseen poses and also from camera viewpoints that have not been observed in the real world. Such tools are essential for the movie industry because they, for example, allow the synthesis of photo-realistic virtual worlds with real-looking humans or of contents that are too dangerous for actors to perform on set. But also video conferencing and telepresence applications can benefit from photo-real 3D characters, as they can enhance the immersive experience of these applications. Here, the traditional Computer Graphics pipeline for rendering photo-realistic images involves many tedious and time-consuming steps that require expert knowledge and are far from real time. Traditional rendering involves character rigging and skinning, the modeling of the surface appearance properties, and physically based ray tracing. Recent learning-based methods attempt to simplify the traditional rendering pipeline and instead learn the rendering function from data resulting in methods that are more easily accessible to non-experts. However, most of them model the synthesis task entirely in image space such that 3D consistency cannot be achieved, and/or they fail to model motion- and view-dependent appearance effects. To this end, this thesis presents a method and ongoing work on character synthesis, which allow the synthesis of controllable photoreal characters that achieve motion- and view-dependent appearance effects as well as 3D consistency and which run in real time. This is technically achieved by a novel coarse-to-fine geometric character representation for efficient synthesis, which can be solely supervised on multi-view imagery. Furthermore, this work shows how such a geometric representation can be combined with an implicit surface representation to boost synthesis and geometric quality.
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31986
[8]
P. Mandros, “Discovering robust dependencies from data,” Universität des Saarlandes, Saarbrücken, 2021.
Export
BibTeX
@phdthesis{Panphd2020, TITLE = {Discovering robust dependencies from data}, AUTHOR = {Mandros, Panagiotis}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-342919}, DOI = {10.22028/D291-34291}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, }
Endnote
%0 Thesis %A Mandros, Panagiotis %Y Vreeken, Jilles %A referee: Weikum, Gerhard %A referee: Webb, Geoffrey %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Discovering robust dependencies from data : %G eng %U http://hdl.handle.net/21.11116/0000-0008-E4CF-E %R 10.22028/D291-34291 %U urn:nbn:de:bsz:291--ds-342919 %F OTHER: hdl:20.500.11880/31535 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 194 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31535
[9]
A. Marx, “Information-Theoretic Causal Discovery,” Universität des Saarlandes, Saarbrücken, 2021.
Export
BibTeX
@phdthesis{Marxphd2020, TITLE = {Information-Theoretic Causal Discovery}, AUTHOR = {Marx, Alexander}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-342908}, DOI = {10.22028/D291-34290}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, }
Endnote
%0 Thesis %A Marx, Alexander %Y Vreeken, Jilles %A referee: Weikum, Gerhard %A referee: Ommen, Thijs van %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Information-Theoretic Causal Discovery : %G eng %U http://hdl.handle.net/21.11116/0000-0008-EECA-9 %R 10.22028/D291-34290 %U urn:nbn:de:bsz:291--ds-342908 %F OTHER: hdl:20.500.11880/31480 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 195 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31480
[10]
S. Metzler, “Structural Building Blocks in Graph Data,” Universität des Saarlandes, Saarbrücken, 2021.
Abstract
Graph data nowadays easily become so large that it is infeasible to study the underlying structures manually. Thus, computational methods are needed to uncover large-scale structural information. In this thesis, we present methods to understand and summarise large networks.

We propose the hyperbolic community model to describe groups of more densely connected nodes within networks using very intuitive parameters. The model accounts for a frequent connectivity pattern in real data: a few community members are highly interconnected; most members mainly have ties to this core. Our model fits real data much better than previously-proposed models. Our corresponding random graph generator, HyGen, creates graphs with realistic intra-community structure.

Using the hyperbolic model, we conduct a large-scale study of the temporal evolution of communities on online question–answer sites. We observe that the user activity within a community is constant with respect to its size throughout its lifetime, and a small group of users is responsible for the majority of the social interactions.

We propose an approach for Boolean tensor clustering. This special tensor factorisation is restricted to binary data and assumes that one of the tensor directions has only non-overlapping factors. These assumptions – valid for many real-world data, in particular time-evolving networks – enable the use of bitwise operators and lift much of the computational complexity from the task.
Export
BibTeX
@phdthesis{SaskiaDiss21, TITLE = {Structural Building Blocks in Graph Data}, AUTHOR = {Metzler, Saskia}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-335366}, DOI = {10.22028/D291-33536}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, ABSTRACT = {Graph data nowadays easily become so large that it is infeasible to study the underlying structures manually. Thus, computational methods are needed to uncover large-scale structural information. In this thesis, we present methods to understand and summarise large networks. We propose the hyperbolic community model to describe groups of more densely connected nodes within networks using very intuitive parameters. The model accounts for a frequent connectivity pattern in real data: a few community members are highly interconnected; most members mainly have ties to this core. Our model fits real data much better than previously-proposed models. Our corresponding random graph generator, HyGen, creates graphs with realistic intra-community structure. Using the hyperbolic model, we conduct a large-scale study of the temporal evolution of communities on online question--answer sites. We observe that the user activity within a community is constant with respect to its size throughout its lifetime, and a small group of users is responsible for the majority of the social interactions. We propose an approach for Boolean tensor clustering. This special tensor factorisation is restricted to binary data and assumes that one of the tensor directions has only non-overlapping factors. These assumptions -- valid for many real-world data, in particular time-evolving networks -- enable the use of bitwise operators and lift much of the computational complexity from the task.}, }
Endnote
%0 Thesis %A Metzler, Saskia %Y Miettinen, Pauli %Y Weikum, Gerhard %Y Günnemann, Stephan %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Structural Building Blocks in Graph Data : Characterised by Hyperbolic Communities and Uncovered by Boolean Tensor Clustering %G eng %U http://hdl.handle.net/21.11116/0000-0008-0BC1-2 %R 10.22028/D291-33536 %U urn:nbn:de:bsz:291--ds-335366 %F OTHER: hdl:20.500.11880/30904 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 196 p. %V phd %9 phd %X Graph data nowadays easily become so large that it is infeasible to study the underlying structures manually. Thus, computational methods are needed to uncover large-scale structural information. In this thesis, we present methods to understand and summarise large networks. We propose the hyperbolic community model to describe groups of more densely connected nodes within networks using very intuitive parameters. The model accounts for a frequent connectivity pattern in real data: a few community members are highly interconnected; most members mainly have ties to this core. Our model fits real data much better than previously-proposed models. Our corresponding random graph generator, HyGen, creates graphs with realistic intra-community structure. Using the hyperbolic model, we conduct a large-scale study of the temporal evolution of communities on online question–answer sites. We observe that the user activity within a community is constant with respect to its size throughout its lifetime, and a small group of users is responsible for the majority of the social interactions. We propose an approach for Boolean tensor clustering. This special tensor factorisation is restricted to binary data and assumes that one of the tensor directions has only non-overlapping factors. These assumptions – valid for many real-world data, in particular time-evolving networks – enable the use of bitwise operators and lift much of the computational complexity from the task. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/30904
[11]
S. Nag Chowdhury, “Text-image synergy for multimodal retrieval and annotation,” Universität des Saarlandes, Saarbrücken, 2021.
Abstract
Text and images are the two most common data modalities found on the Internet. Understanding the synergy between text and images, that is, seamlessly analyzing information from these modalities may be trivial for humans, but is challenging for software systems. In this dissertation we study problems where deciphering text-image synergy is crucial for finding solutions. We propose methods and ideas that establish semantic connections between text and images in multimodal contents, and empirically show their effectiveness in four interconnected problems: Image Retrieval, Image Tag Refinement, Image-Text Alignment, and Image Captioning. Our promising results and observations open up interesting scopes for future research involving text-image data understanding.
Export
BibTeX
@phdthesis{Chowphd2021, TITLE = {Text-image synergy for multimodal retrieval and annotation}, AUTHOR = {Nag Chowdhury, Sreyasi}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-345092}, DOI = {10.22028/D291-34509}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, ABSTRACT = {Text and images are the two most common data modalities found on the Internet. Understanding the synergy between text and images, that is, seamlessly analyzing information from these modalities may be trivial for humans, but is challenging for software systems. In this dissertation we study problems where deciphering text-image synergy is crucial for finding solutions. We propose methods and ideas that establish semantic connections between text and images in multimodal contents, and empirically show their effectiveness in four interconnected problems: Image Retrieval, Image Tag Refinement, Image-Text Alignment, and Image Captioning. Our promising results and observations open up interesting scopes for future research involving text-image data understanding.}, }
Endnote
%0 Thesis %A Nag Chowdhury, Sreyasi %A referee: Weikum, Gerhard %A referee: de Melo, Gerard %A referee: Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Text-image synergy for multimodal retrieval and annotation : %G eng %U http://hdl.handle.net/21.11116/0000-0009-428A-1 %R 10.22028/D291-34509 %U urn:nbn:de:bsz:291--ds-345092 %F OTHER: hdl:20.500.11880/31690 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 131 p. %V phd %9 phd %X Text and images are the two most common data modalities found on the Internet. Understanding the synergy between text and images, that is, seamlessly analyzing information from these modalities may be trivial for humans, but is challenging for software systems. In this dissertation we study problems where deciphering text-image synergy is crucial for finding solutions. We propose methods and ideas that establish semantic connections between text and images in multimodal contents, and empirically show their effectiveness in four interconnected problems: Image Retrieval, Image Tag Refinement, Image-Text Alignment, and Image Captioning. Our promising results and observations open up interesting scopes for future research involving text-image data understanding. %K image retrieval image-text alignment image captioning commonsense knowledge %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31690
[12]
M. Omran, “From Pixels to People,” Universität des Saarlandes, Saarbrücken, 2021.
Abstract
Humans are at the centre of a significant amount of research in computer vision. Endowing machines with the ability to perceive people from visual data is an immense scientific challenge with a high degree of direct practical relevance. Success in automatic perception can be measured at different levels of abstraction, and this will depend on which intelligent behaviour we are trying to replicate: the ability to localise persons in an image or in the environment, understanding how persons are moving at the skeleton and at the surface level, interpreting their interactions with the environment including with other people, and perhaps even anticipating future actions. In this thesis we tackle different sub-problems of the broad research area referred to as "looking at people", aiming to perceive humans in images at different levels of granularity.

We start with bounding box-level pedestrian detection: We present a retrospective analysis of methods published in the decade preceding our work, identifying various strands of research that have advanced the state of the art. With quantitative experiments, we demonstrate the critical role of developing better feature representations and having the right training distribution. We then contribute two methods based on the insights derived from our analysis: one that combines the strongest aspects of past detectors and another that focuses purely on learning representations. The latter method outperforms more complicated approaches, especially those based on hand-crafted features. We conclude our work on pedestrian detection with a forward-looking analysis that maps out potential avenues for future research.

We then turn to pixel-level methods: Perceiving humans requires us to both separate them precisely from the background and identify their surroundings. To this end, we introduce Cityscapes, a large-scale dataset for street scene understanding. This has since established itself as a go-to benchmark for segmentation and detection. We additionally develop methods that relax the requirement for expensive pixel-level annotations, focusing on the task of boundary detection, i.e. identifying the outlines of relevant objects and surfaces. Next, we make the jump from pixels to 3D surfaces, from localising and labelling to fine-grained spatial understanding. We contribute a method for recovering 3D human shape and pose, which marries the advantages of learning-based and model-based approaches.

We conclude the thesis with a detailed discussion of benchmarking practices in computer vision. Among other things, we argue that the design of future datasets should be driven by the general goal of combinatorial robustness besides task-specific considerations.
Export
BibTeX
@phdthesis{Omranphd2021, TITLE = {From Pixels to People}, AUTHOR = {Omran, Mohamed}, LANGUAGE = {eng}, URL = {nbn:de:bsz:291--ds-366053}, DOI = {10.22028/D291-36605}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, ABSTRACT = {Humans are at the centre of a significant amount of research in computer vision. Endowing machines with the ability to perceive people from visual data is an immense scientific challenge with a high degree of direct practical relevance. Success in automatic perception can be measured at different levels of abstraction, and this will depend on which intelligent behaviour we are trying to replicate: the ability to localise persons in an image or in the environment, understanding how persons are moving at the skeleton and at the surface level, interpreting their interactions with the environment including with other people, and perhaps even anticipating future actions. In this thesis we tackle different sub-problems of the broad research area referred to as "looking at people", aiming to perceive humans in images at different levels of granularity. We start with bounding box-level pedestrian detection: We present a retrospective analysis of methods published in the decade preceding our work, identifying various strands of research that have advanced the state of the art. With quantitative experiments, we demonstrate the critical role of developing better feature representations and having the right training distribution. We then contribute two methods based on the insights derived from our analysis: one that combines the strongest aspects of past detectors and another that focuses purely on learning representations. The latter method outperforms more complicated approaches, especially those based on hand-crafted features. We conclude our work on pedestrian detection with a forward-looking analysis that maps out potential avenues for future research. We then turn to pixel-level methods: Perceiving humans requires us to both separate them precisely from the background and identify their surroundings. To this end, we introduce Cityscapes, a large-scale dataset for street scene understanding. This has since established itself as a go-to benchmark for segmentation and detection. We additionally develop methods that relax the requirement for expensive pixel-level annotations, focusing on the task of boundary detection, i.e. identifying the outlines of relevant objects and surfaces. Next, we make the jump from pixels to 3D surfaces, from localising and labelling to fine-grained spatial understanding. We contribute a method for recovering 3D human shape and pose, which marries the advantages of learning-based and model-based approaches. We conclude the thesis with a detailed discussion of benchmarking practices in computer vision. Among other things, we argue that the design of future datasets should be driven by the general goal of combinatorial robustness besides task-specific considerations.}, }
Endnote
%0 Thesis %A Omran, Mohamed %Y Schiele, Bernt %A referee: Gall, Jürgen %+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T From Pixels to People : Recovering Location, Shape and Pose of Humans in Images %G eng %U http://hdl.handle.net/21.11116/0000-000A-CDBF-9 %R 10.22028/D291-36605 %U nbn:de:bsz:291--ds-366053 %F OTHER: hdl:20.500.11880/33466 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 252 p. %V phd %9 phd %X Humans are at the centre of a significant amount of research in computer vision. Endowing machines with the ability to perceive people from visual data is an immense scientific challenge with a high degree of direct practical relevance. Success in automatic perception can be measured at different levels of abstraction, and this will depend on which intelligent behaviour we are trying to replicate: the ability to localise persons in an image or in the environment, understanding how persons are moving at the skeleton and at the surface level, interpreting their interactions with the environment including with other people, and perhaps even anticipating future actions. In this thesis we tackle different sub-problems of the broad research area referred to as "looking at people", aiming to perceive humans in images at different levels of granularity. We start with bounding box-level pedestrian detection: We present a retrospective analysis of methods published in the decade preceding our work, identifying various strands of research that have advanced the state of the art. With quantitative experiments, we demonstrate the critical role of developing better feature representations and having the right training distribution. We then contribute two methods based on the insights derived from our analysis: one that combines the strongest aspects of past detectors and another that focuses purely on learning representations. The latter method outperforms more complicated approaches, especially those based on hand-crafted features. We conclude our work on pedestrian detection with a forward-looking analysis that maps out potential avenues for future research. We then turn to pixel-level methods: Perceiving humans requires us to both separate them precisely from the background and identify their surroundings. To this end, we introduce Cityscapes, a large-scale dataset for street scene understanding. This has since established itself as a go-to benchmark for segmentation and detection. We additionally develop methods that relax the requirement for expensive pixel-level annotations, focusing on the task of boundary detection, i.e. identifying the outlines of relevant objects and surfaces. Next, we make the jump from pixels to 3D surfaces, from localising and labelling to fine-grained spatial understanding. We contribute a method for recovering 3D human shape and pose, which marries the advantages of learning-based and model-based approaches. We conclude the thesis with a detailed discussion of benchmarking practices in computer vision. Among other things, we argue that the design of future datasets should be driven by the general goal of combinatorial robustness besides task-specific considerations. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/33466
[13]
A. Pandey, “Variety Membership Testing in Algebraic Complexity Theory,” Universität des Saarlandes, Saarbrücken, 2021.
Export
BibTeX
@phdthesis{Pandeyphd2021, TITLE = {Variety Membership Testing in Algebraic Complexity Theory}, AUTHOR = {Pandey, Anurag}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-342440}, DOI = {10.22028/D291-34244}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, }
Endnote
%0 Thesis %A Pandey, Anurag %Y Bläser, Markus %A referee: Ikenmeyer, Christian %A referee: Mahajan, Meena %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Variety Membership Testing in Algebraic Complexity Theory : %G eng %U http://hdl.handle.net/21.11116/0000-0008-E9F5-D %R 10.22028/D291-34244 %F OTHER: hdl:20.500.11880/31479 %U urn:nbn:de:bsz:291--ds-342440 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 128 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31479
[14]
M. Scherer, “Computational solutions for addressing heterogeneity in DNA methylation data,” Universität des Saarlandes, Saarbrücken, 2021.
Export
BibTeX
@phdthesis{Schererphd2020, TITLE = {Computational solutions for addressing heterogeneity in {DNA} methylation data}, AUTHOR = {Scherer, Michael}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-338080}, DOI = {10.22028/D291-33808}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, }
Endnote
%0 Thesis %A Scherer, Michael %Y Lengauer, Thomas %A referee: Walther, Jörn %A referee: Marschall, Tobias %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Computational solutions for addressing heterogeneity in DNA methylation data : %G eng %U http://hdl.handle.net/21.11116/0000-0008-BA18-C %R 10.22028/D291-33808 %U urn:nbn:de:bsz:291--ds-338080 %F OTHER: hdl:20.500.11880/31186 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 147 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31186
[15]
X. Shen, “Deep Latent-Variable Models for Neural Text Generation,” Universität des Saarlandes, Saarbrücken, 2021.
Export
BibTeX
@phdthesis{Shenphd2021, TITLE = {Deep Latent-Variable Models for Neural Text Generation}, AUTHOR = {Shen, Xiaoyu}, LANGUAGE = {eng}, URL = {nbn:de:bsz:291--ds-350558}, DOI = {10.22028/D291-35055}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, }
Endnote
%0 Thesis %A Shen, Xiaoyu %Y Klakow, Dietrich %A referee: Weikum, Gerhard %A referee: Schütze, Hinrich %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Deep Latent-Variable Models for Neural Text Generation : %G eng %U http://hdl.handle.net/21.11116/0000-0009-B25D-6 %R 10.22028/D291-35055 %U nbn:de:bsz:291--ds-350558 %F OTHER: hdl:20.500.11880/32106 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 201 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/32106
[16]
R. Shetty, “Adversarial Content Manipulation for Analyzing and Improving Model Robustness,” Universität des Saarlandes, Saarbrücken, 2021.
Export
BibTeX
@phdthesis{Shettyphd2020, TITLE = {Adversarial Content Manipulation for Analyzing and Improving Model Robustness}, AUTHOR = {Shetty, Rakshith}, LANGUAGE = {eng}, URL = {nbn:de:bsz:291--ds-346515}, DOI = {10.22028/D291-34651}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, }
Endnote
%0 Thesis %A Shetty, Rakshith %Y Schiele, Bernt %A referee: Fritz, Mario %A referee: Torralba, Antonio %+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society External Organizations %T Adversarial Content Manipulation for Analyzing and Improving Model Robustness : %G eng %U http://hdl.handle.net/21.11116/0000-0009-5D93-9 %R 10.22028/D291-34651 %U nbn:de:bsz:291--ds-346515 %F OTHER: hdl:20.500.11880/31874 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 191 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31874
[17]
A. Tewari, “Self-supervised reconstruction and synthesis of faces,” Universität des Saarlandes, Saarbrücken, 2021.
Export
BibTeX
@phdthesis{Tewariphd2021, TITLE = {Self-supervised reconstruction and synthesis of faces}, AUTHOR = {Tewari, Ayush}, LANGUAGE = {eng}, URL = {nbn:de:bsz:291--ds-345982}, DOI = {10.22028/D291-34598}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, }
Endnote
%0 Thesis %A Tewari, Ayush %Y Theobalt, Christian %A referee: Zollhöfer, Michael %A referee: Wonka, Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Self-supervised reconstruction and synthesis of faces : %G eng %U http://hdl.handle.net/21.11116/0000-0009-9CD2-A %R 10.22028/D291-34598 %U nbn:de:bsz:291--ds-345982 %F OTHER: hdl:20.500.11880/31754 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 173 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/31754
[18]
P. Wellnitz, “Counting Patterns in Strings and Graphs,” Universität des Saarlandes, Saarbrücken, 2021.
Export
BibTeX
@phdthesis{WellnitzPhD21, TITLE = {Counting Patterns in Strings and Graphs}, AUTHOR = {Wellnitz, Philip}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291--ds-350981}, DOI = {10.22028/D291-35098}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2021}, MARGINALMARK = {$\bullet$}, DATE = {2021}, }
Endnote
%0 Thesis %A Wellnitz, Philip %Y Mehlhorn, Kurt %A referee: Landau, Gad M. %A referee: Grohe, Martin %A referee: Bringmann, Karl %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Counting Patterns in Strings and Graphs : %G eng %U http://hdl.handle.net/21.11116/0000-000C-1ED8-0 %R 10.22028/D291-35098 %U urn:nbn:de:bsz:291--ds-350981 %F OTHER: hdl:20.500.11880/32103 %I Universität des Saarlandes %C Saarbrücken %D 2021 %P 253 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/32103