Last Year

Master
[1]
M. Alzayat, “PolSim: Automatic Policy Validation via Meta-Data Flow Simulation,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Every year, millions of confidential data records are leaked accidentally due to bugs, misconfiguration, or operator error. These incidents are common in large, complex, and fast-evolving data processing systems. Ensuring compliance with data policies is a major challenge. Thoth is an information flow control system that uses coarse-grained taint tracking to control the flow of data. This is achieved by enforcing relevant declarative policies at process boundaries. This enforcement applies regardless of bugs, misconfiguration, or compromises in application code, or actions by unprivileged operators. Designing policies that ensure all and only compliant flows are allowed remains a complex and error-prone process. In this work, we introduce PolSim, a simulation tool that aids system policy designers by validating the provided policies and systematically ensuring that the system allows all and only the expected flows. Our proposed simulator approximates the dynamic run-time environment, semi-automatically suggests internal flow policies based on data flow, and provides debugging hints to help policy designers develop a working policy for the intended system before deployment.
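To make the "all and only expected flows" check concrete, here is a minimal sketch in Python, not PolSim itself: a policy is modeled as a set of allowed conduit edges in a data-flow graph, and validation compares the transitively reachable flows against a list of expected ones. All node and flow names are illustrative assumptions.

```python
# Minimal sketch (not PolSim): check that a policy admits "all and only"
# the expected flows, with policies modeled as allowed edges in a graph.
def reachable(edges, src):
    seen, stack = {src}, [src]
    while stack:
        node = stack.pop()
        for a, b in edges:
            if a == node and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def validate(allowed_edges, nodes, expected_flows):
    # every transitive flow the policy admits, vs. what the designer expects
    actual = {(s, t) for s in nodes
              for t in reachable(allowed_edges, s) if s != t}
    expected = set(expected_flows)
    return sorted(actual - expected), sorted(expected - actual)

# Toy system: user data may reach the indexer but must never reach the log.
nodes = {"user_db", "indexer", "search", "log"}
policy = {("user_db", "indexer"), ("indexer", "search"), ("search", "log")}
unexpected, denied = validate(policy, nodes,
                              [("user_db", "indexer"), ("user_db", "search"),
                               ("indexer", "search"), ("search", "log")])
print("unexpected flows:", unexpected)  # transitive user_db -> log leak shows up
print("missing flows:", denied)
```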
[2]
S. Bozca, “Discrete Osmosis Methods for Image Processing,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Partial differential equations can model many physical phenomena and can be used to simulate them on a computer. Osmosis, which takes the form of a convection-diffusion equation, has found many application areas in image processing. However, under current methods this model converges slowly, depending on the incompatibility of the drift vector field used in the model, which precludes fast and possibly real-time applications. In this thesis, we therefore take a deeper look at what incompatibility means and how it affects the steady states of the osmosis process. In addition, we evaluate several promising methods which offer a substantial computational advantage over classical iterative methods.
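For context, the osmosis model discussed here is usually written as the following drift-diffusion evolution (stated in standard notation from the osmosis literature, not taken from the thesis itself):

```latex
% Osmosis evolution for a positive grey-value image u on the domain Omega,
% driven by a drift vector field d.
\begin{align}
  \partial_t u &= \Delta u - \operatorname{div}(d\,u)
      && \text{on } \Omega \times (0, \infty),\\
  \langle \nabla u - u\,d,\; n \rangle &= 0
      && \text{on } \partial\Omega \quad \text{(no flux across the boundary)}.
\end{align}
% If d is compatible, i.e. d = \nabla \ln v for some positive image v, the
% steady state is a rescaled copy of v. Incompatible drift fields (e.g.
% fused from several images) still admit a steady state, but classical
% iterative schemes approach it slowly; that is the convergence issue
% studied in this thesis.
```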
[3]
C. X. Chu, “Mining How-to Task Knowledge from Online Communities,” Universität des Saarlandes, Saarbrücken, 2016.
[4]
O. Darwish, “Market Equilibrium Computation for the Linear Arrow-Debreu Model,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
The market equilibrium problem is the problem of finding prices for goods such that the supply in the market equals the demand. The problem applies to several market models, such as the linear Arrow-Debreu model, one of the fundamental economic market models. Over the years, various algorithms have been developed to compute the market equilibrium of the linear Arrow-Debreu model. In 2013, Duan and Mehlhorn presented the first combinatorial polynomial-time algorithm for computing the market equilibrium of this model. In this thesis, we optimize, generalize, and implement the Duan-Mehlhorn algorithm. We present a novel algorithm for computing balanced flows in equality networks, which is an application of parametric flows. This algorithm outperforms the current best algorithm for computing balanced flows; hence, it improves Duan-Mehlhorn's algorithm by almost a factor of n, the size of the network. Moreover, we generalize Duan-Mehlhorn's algorithm by relaxing some of its assumptions. Finally, we describe our approach to implementing Duan-Mehlhorn's algorithm. Preliminary results of our implementation, based on random utility instances, show that its running time scales significantly better than the theoretical time complexity suggests.
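As a toy illustration of the equality networks mentioned above (a sketch, not the Duan-Mehlhorn algorithm): candidate prices for a linear Arrow-Debreu market, where agent i owns one unit of good i, can be verified by building a network with budget edges from the source to each agent, equality edges from each agent to its maximum bang-per-buck goods, and price edges from each good to the sink; the prices are an equilibrium iff a maximum flow saturates every budget. The utilities and prices below are made up.

```python
# Sketch: verify candidate prices for a linear Arrow-Debreu market by
# max flow over the equality network (Edmonds-Karp on a dense matrix).
from collections import deque

def max_flow(cap, s, t):
    n, flow = len(cap), 0.0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        v, bottleneck = t, float("inf")
        while v != s:                       # find the path bottleneck
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                       # push flow along the path
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

def is_equilibrium(a, p):
    # nodes: source 0, agents 1..n, goods n+1..2n, sink 2n+1
    n = len(p)
    s, t = 0, 2 * n + 1
    cap = [[0.0] * (2 * n + 2) for _ in range(2 * n + 2)]
    for i in range(n):
        cap[s][1 + i] = p[i]                # agent i's budget
        cap[1 + n + i][t] = p[i]            # money good i can absorb
        best = max(a[i][j] / p[j] for j in range(n))
        for j in range(n):
            if a[i][j] / p[j] >= best - 1e-12:
                cap[1 + i][1 + n + j] = sum(p)   # equality edge
    return abs(max_flow(cap, s, t) - sum(p)) < 1e-9

a = [[1.0, 2.0], [2.0, 1.0]]   # each agent prefers the other's good
print(is_equilibrium(a, [1.0, 1.0]))  # True: the symmetric prices clear
```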
[5]
A. El-Korashy, “A Formal Model for Capability Machines: An Illustrative Case Study towards Secure Compilation to CHERI,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Vulnerabilities in computer systems arise in part from programmers' logical errors, and in part from programmers' false (i.e., over-optimistic) expectations about the guarantees given by the abstractions of a programming language. For the latter kind of vulnerability, architectures with hardware or instruction-level support for protection mechanisms can be useful. One trend in computer systems protection is hardware-supported enforcement of security guarantees and policies. Capability-based machines are one instance of hardware-based protection mechanisms. CHERI is a recent implementation of a 64-bit MIPS-based capability architecture with byte-granularity memory protection. The goal of this thesis is to provide a pen-and-paper formal model of the CHERI architecture, with the aim of formally reasoning about the security guarantees that the features of CHERI can offer. We first give simplified operational semantics for the instructions, then prove that capabilities are unforgeable in our model. Second, we show that existing techniques for enforcing control-flow integrity can be adapted to the CHERI ISA. Third, we show that one notion of memory compartmentalization can be achieved with the help of CHERI's memory protection. We conclude by suggesting other security building blocks that would be helpful to reason about, and by laying out a plan for potentially using this work to build a secure compiler, i.e., a compiler that preserves security properties. The outlook and motivation for this work is to highlight the potential of CHERI as a target architecture for secure compilation.
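The unforgeability property has a simple flavor that can be sketched in a few lines (an illustrative toy, not the thesis model or the CHERI ISA): capabilities can only be derived from existing capabilities by monotonically shrinking bounds or dropping permissions, never forged from raw integers.

```python
# Toy capability model: derivation is monotone, access is bounds- and
# permission-checked. Names and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    base: int
    length: int
    perms: frozenset  # e.g. {"load", "store"}

    def restrict(self, base=None, length=None, perms=None):
        new_base = self.base if base is None else base
        new_len = self.length if length is None else length
        new_perms = self.perms if perms is None else frozenset(perms)
        # monotonicity: a derived capability never exceeds its parent
        assert new_base >= self.base
        assert new_base + new_len <= self.base + self.length
        assert new_perms <= self.perms
        return Capability(new_base, new_len, new_perms)

    def load(self, memory, addr):
        assert "load" in self.perms and self.base <= addr < self.base + self.length
        return memory[addr]

root = Capability(0, 1024, frozenset({"load", "store"}))
sandbox = root.restrict(base=256, length=128, perms={"load"})
memory = {300: 42}
print(sandbox.load(memory, 300))   # permitted: in bounds, has "load"
# sandbox.restrict(length=512)     # would fail: bounds cannot grow
```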
[6]
A. Hanka, “Material Appearance Editing in Complex Volume and Surface Renderings,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Under global illumination, material editing is a non-linear task; even in scenes of moderate complexity, its global nature makes the final appearance of other objects in the scene difficult to predict. In this thesis, a novel interactive method is proposed for object appearance design. To achieve this, a randomized per-pixel parametrization of scene materials is defined. At rendering time, parametrized materials have different properties for every pixel. This way, multiple rendered results are encoded into one image. We call this collection of data a hyperimage. Material editing amounts to projecting the hyperimage onto a given parameter vector, which is achieved using non-linear weighted regression. Pixel guides based on geometry (normals, depth, and unique object ID), materials, and lighting properties of the scene enter the regression problem as pixel weights. To ensure that only relevant features are considered, a rendering-based feature selection method is introduced, which uses a precomputed pixel-feature function encoding the per-pixel importance of each parametrized material. The method of hyperimages is independent of the underlying rendering algorithm, while supporting full global illumination and surface interactions. Our method is not limited to the parametrization of materials and can be extended to other scene properties. As an example, we show parametrization of the position of an area light source.
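The projection step can be pictured with a small sketch (assumed data shapes; the thesis uses non-linear weighted regression with pixel guides as weights, simplified here to a Gaussian-weighted Nadaraya-Watson estimate per pixel):

```python
# Sketch: each pixel stores samples (theta_k, color_k) rendered with
# randomized material parameters; editing projects the hyperimage onto
# a query parameter vector by weighted regression over those samples.
import numpy as np

def project_pixel(thetas, colors, theta_query, bandwidth=0.1):
    # thetas: (K, P) sampled parameter vectors, colors: (K, 3) RGB samples
    d2 = np.sum((thetas - theta_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))       # per-sample weights
    return (w[:, None] * colors).sum(0) / max(w.sum(), 1e-12)

rng = np.random.default_rng(0)
thetas = rng.uniform(0.0, 1.0, size=(64, 1))       # e.g. one roughness knob
colors = np.hstack([thetas, 1.0 - thetas, 0.5 * np.ones_like(thetas)])
print(project_pixel(thetas, colors, np.array([0.3])))  # roughly [0.3, 0.7, 0.5]
```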
[7]
A. Mokarian Forooshani, “Deep Learning for Filling Blanks in Image Captions,” Universität des Saarlandes, Saarbrücken, 2016.
[8]
R. Sethi, “Evaluation of Population-Based Haplotype Phasing Algorithms,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
The correct order of alleles on haplotypes carries valuable information with many applications in GWAS studies and population genetics. A considerable number of computational and statistical algorithms have been developed for haplotype phasing. Historically, these algorithms were compared using simulated population data with less dense markers, inspired by genotype data from the HapMap project. Due to advances in next-generation sequencing (NGS) and its falling cost, thousands of individuals across the world have now been sequenced in the 1000 Genomes Project. This has generated genotype information for individuals of different ethnicities, along with much denser genetic variation. Here, we have developed a scalable approach to assess state-of-the-art population-based haplotype phasing algorithms with benchmark data designed by simulating the population (unrelated and related individuals), the NGS pipeline, and genotype calling. The most accurate algorithm for phase inference in unrelated individuals was MVNCall (v1), while the DuoHMM approach of Shapeit (v2) had the lowest switch error rate of 0.298% (with true genotype likelihoods) in related individuals. Moreover, we conducted a comprehensive assessment of algorithms for imputing missing genotypes in the population with a reference panel. On this task, Impute2 (v2.3.2) and Beagle (v4.1) both performed competitively under different imputation scenarios and had genotype concordance rates of >99%. However, Impute2 was better at imputing genotypes with a minor allele frequency of <0.025 in the reference panel.
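For reference, the switch error rate reported above is commonly computed as follows (a sketch under the usual definition: the fraction of consecutive heterozygous sites whose inferred phase orientation flips relative to the truth; the alleles below are illustrative):

```python
# Switch error rate between a true and an inferred haplotype, given as
# 0/1 allele lists for one strand at heterozygous sites (the other
# strand is complementary, so one strand determines the phase).
def switch_error_rate(true_hap, inferred_hap):
    orient = [t == i for t, i in zip(true_hap, inferred_hap)]
    switches = sum(1 for a, b in zip(orient, orient[1:]) if a != b)
    return switches / max(len(orient) - 1, 1)

true_h     = [0, 1, 1, 0, 1, 0]
inferred_h = [0, 1, 0, 1, 0, 1]   # one switch after the second site
print(f"{switch_error_rate(true_h, inferred_h):.3f}")  # 0.200
```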
[9]
B. T. Teklehaimanot, “Virtualization of Video Streaming Functions,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Edgeware is a leading provider of video streaming solutions to network and service operators. The Edgeware Video Consolidation Platform (VCP) is a complete video streaming solution consisting of the Convoy Management system and Orbit streaming servers. The Orbit streaming servers are purpose-designed hardware platforms composed of a dedicated hardware streaming engine and a purpose-designed flash storage system. The Orbit streaming server is an accelerated HTTP streaming cache server with up to 80 Gbps of bandwidth that can stream to 128,000 clients from a single rack unit. In line with the trend of moving more and more functionality into a virtualized or software environment, the main goal of this thesis is a performance comparison between Edgeware's Orbit streaming server and one of the best generic HTTP accelerators (reverse proxy servers), after implementing Orbit's logging functionality on top of it. This is achieved by implementing test cases for the use cases that help evaluate those servers. Finally, after evaluating the candidate proxy servers, Varnish is selected, and the modified Varnish is compared with Orbit to investigate the performance difference.
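Such test cases boil down to measuring request throughput and delivered bandwidth against a cache endpoint. A toy harness along those lines might look as follows (illustrative only; the URL is hypothetical and this is not Edgeware's or Varnish's tooling):

```python
# Toy single-threaded benchmark: fetch a segment URL repeatedly and
# report request rate and delivered bandwidth.
import time
import urllib.request

def measure_throughput(url, requests=100):
    start = time.perf_counter()
    transferred = 0
    for _ in range(requests):
        with urllib.request.urlopen(url) as resp:
            transferred += len(resp.read())
    elapsed = time.perf_counter() - start
    return requests / elapsed, 8 * transferred / elapsed / 1e6  # req/s, Mbit/s

# hypothetical endpoint behind the proxy under test:
# rps, mbps = measure_throughput("http://cache.example.com/segment1.ts")
```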
[10]
M. Zheng, “Comparison of Software Tools for microRNA Next Generation Sequencing Data Analysis,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Next-generation sequencing (NGS) is very promising for studying miRNAs comprehensively: it can not only profile known miRNAs but also predict novel ones. An increasing number of software tools have been developed for microRNA NGS data analysis. Nevertheless, an overall comparison of these tools is still rare, and how much they diverge is unknown, which makes it hard for researchers to select an optimal tool. In our study, we performed a comprehensive comparison of seven representative software tools on real data in various respects, including detected known miRNAs, miRNA abundance, differential expression, and predicted novel miRNAs. We present the divergences and similarities of these tools and give a basic evaluation of their performance. In addition, some extreme cases in miRNAkey were explored. The comparison suggests that the performance of these software tools is very diverse and that caution is necessary when choosing one. The summary of the tools' features and the comparison of their performance in our study provide useful information to help researchers select an appropriate software tool.
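One of the comparisons above, the agreement between two tools on detected known miRNAs, reduces to simple set statistics; a small sketch with made-up miRNA names:

```python
# Overlap between the sets of known miRNAs detected by two tools:
# shared count plus Jaccard similarity of the two detection sets.
def overlap_stats(detected_a, detected_b):
    a, b = set(detected_a), set(detected_b)
    union = a | b
    jaccard = len(a & b) / len(union) if union else 1.0
    return len(a & b), jaccard

tool_a = ["hsa-miR-21-5p", "hsa-miR-155-5p", "hsa-let-7a-5p"]
tool_b = ["hsa-miR-21-5p", "hsa-let-7a-5p", "hsa-miR-126-3p"]
print(overlap_stats(tool_a, tool_b))  # (2, 0.5)
```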
PhD
[11]
N. Azmy, “A Machine-checked Proof of Correctness of Pastry,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
A distributed hash table (DHT) is a peer-to-peer network that offers the function of a classic hash table, but where different key-value pairs are stored at different nodes on the network. Like a classic hash table, the main function provided by a DHT is key lookup, which retrieves the value stored at a given key. Examples of DHT protocols include Chord, Pastry, Kademlia, and Tapestry. Such DHT protocols promise certain correctness and performance guarantees, but formal verification typically discovers border cases that violate those guarantees. In his PhD thesis, Tianxiang Lu reported correctness problems in published versions of Pastry and developed a model called LuPastry, for which he provided a partial proof of correct delivery of lookup messages assuming no node failure, mechanized in the TLA+ Proof System. In analyzing Lu's proof, I discovered that it contained unproven assumptions, and found counterexamples to several of these assumptions. The contribution of this thesis is threefold. First, I present LuPastry+, a revised TLA+ specification of LuPastry. Aside from needed bug fixes, LuPastry+ contains new definitions that make the specification more modular and significantly improve proof automation. Second, I present a complete TLA+ proof of correct delivery for LuPastry+. Third, I prove that the final step of the node-join process of LuPastry/LuPastry+ is not necessary to achieve consistency. In particular, I develop a new specification with a simpler node-join process, which I denote by Simplified LuPastry+, and prove correct delivery of lookup messages for this new specification. The proof of correctness of Simplified LuPastry+ is written by reusing the proof for LuPastry+, which represents a success story in proof reuse, especially for proofs of this size. Each of the two proofs amounts to over 32,000 proof steps; to my knowledge, they are currently the largest proofs written in the TLA+ language, and, together with Lu's proof, the only examples of applying full theorem proving to the verification of DHT protocols.
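The correct-delivery property can be stated operationally: a lookup for key k must end at the live node whose identifier is numerically closest to k on the ring. A toy Python illustration (a drastic simplification of the TLA+ model, with an assumed tiny identifier space):

```python
# Correct delivery on an identifier ring: the lookup for a key should
# be delivered to the live node numerically closest to it, with
# distances wrapping around the ring.
RING = 2 ** 8  # tiny identifier space, for illustration only

def ring_distance(a, b):
    d = abs(a - b)
    return min(d, RING - d)

def deliver(live_nodes, key):
    return min(live_nodes, key=lambda n: ring_distance(n, key))

nodes = [10, 70, 135, 200]
assert deliver(nodes, 68) == 70
assert deliver(nodes, 250) == 10   # distance wraps around the ring
```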
[12]
M. Bachynskyi, “Biomechanical Models for Human-Computer Interaction,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays, and mid-air interfaces, already are a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number, or even absence, of input movement constraints imposed by a device form factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of four issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis, and design methods for high-throughput ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions: adopting optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; identifying the applicability limits of the method for a range of HCI tasks; validating the method's outputs against ground-truth recordings in a typical HCI setting; demonstrating the added value of the method in the analysis of performance and ergonomics of touchscreen devices; and summarizing the performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the four above-mentioned issues of post-desktop input. Its efficiency makes it possible to tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) and at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy, and throughput) which can be related to the physiological data to address the non-uniformity of the movement space. In our adaptation the method does not require experimenters to have specialized expertise, making it accessible to a wide range of researchers and designers and contributing towards a solution to the issue of post-desktop knowledge sparsity.
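On the performance side, the speed/accuracy/throughput measures mentioned above are typically Fitts-style; a small sketch of the standard computation (the formula is standard in HCI, while the trial numbers below are made up):

```python
# Fitts-style throughput of a pointing trial: index of difficulty (bits)
# divided by movement time (seconds), using the Shannon formulation.
import math

def throughput(distance_m, width_m, movement_time_s):
    index_of_difficulty = math.log2(distance_m / width_m + 1.0)  # bits
    return index_of_difficulty / movement_time_s                 # bits/s

# one mid-air pointing trial: 40 cm reach to an 8 cm target in 0.9 s
print(f"{throughput(0.40, 0.08, 0.9):.2f} bits/s")  # ~2.87
```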
Export
BibTeX
@phdthesis{Bachyphd16, TITLE = {Biomechanical Models for Human-Computer Interaction}, AUTHOR = {Bachynskyi, Myroslav}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66888}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, ABSTRACT = {Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, already are a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number or even absence of input movement constraints imposed by a device form-factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of 4 issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions: - adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; - identify applicability limits of the method for a range of HCI tasks; - validate the method outputs against ground truth recordings in typical HCI setting; - demonstrate the added value of the method in analysis of performance and ergonomics of touchscreen devices; and - summarize performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the 4 above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) as well as at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity.}, }
Endnote
%0 Thesis %A Bachynskyi, Myroslav %Y Steimle, J&#252;rgen %A referee: Schmidt, Albrecht %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Biomechanical Models for Human-Computer Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-0FD4-9 %U urn:nbn:de:bsz:291-scidok-66888 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2016 %P xiv, 206 p. %V phd %9 phd %X Post-desktop user interfaces, such as smartphones, tablets, interactive tabletops, public displays and mid-air interfaces, already are a ubiquitous part of everyday human life, or have the potential to be. One of the key features of these interfaces is the reduced number or even absence of input movement constraints imposed by a device form-factor. This freedom is advantageous for users, allowing them to interact with computers using more natural limb movements; however, it is a source of 4 issues for research and design of post-desktop interfaces which make traditional analysis methods inefficient: the new movement space is orders of magnitude larger than the one analyzed for traditional desktops; the existing knowledge on post-desktop input methods is sparse and sporadic; the movement space is non-uniform with respect to performance; and traditional methods are ineffective or inefficient in tackling physical ergonomics pitfalls in post-desktop interfaces. These issues lead to the research problem of efficient assessment, analysis and design methods for high-throughput ergonomic post-desktop interfaces. To solve this research problem and support researchers and designers, this thesis proposes efficient experiment- and model-based assessment methods for post-desktop user interfaces. We achieve this through the following contributions: - adopt optical motion capture and biomechanical simulation for HCI experiments as a versatile source of both performance and ergonomics data describing an input method; - identify applicability limits of the method for a range of HCI tasks; - validate the method outputs against ground truth recordings in typical HCI setting; - demonstrate the added value of the method in analysis of performance and ergonomics of touchscreen devices; and - summarize performance and ergonomics of a movement space through a clustering of physiological data. The proposed method successfully deals with the 4 above-mentioned issues of post-desktop input. The efficiency of the methods makes it possible to effectively tackle the issue of large post-desktop movement spaces both at early design stages (through a generic model of a movement space) as well as at later design stages (through user studies). The method provides rich data on physical ergonomics (joint angles and moments, muscle forces and activations, energy expenditure and fatigue), making it possible to solve the issue of ergonomics pitfalls. Additionally, the method provides performance data (speed, accuracy and throughput) which can be related to the physiological data to solve the issue of non-uniformity of movement space. In our adaptation the method does not require experimenters to have specialized expertise, thus making it accessible to a wide range of researchers and designers and contributing towards the solution of the issue of post-desktop knowledge sparsity. 
%U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6688/
[13]
W.-C. Chiu, “Bayesian Non-Parametrics for Multi-Modal Segmentation,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{walonPhDThesis2016, TITLE = {Bayesian Non-Parametrics for Multi-Modal Segmentation}, AUTHOR = {Chiu, Wei-Chen}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66378}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Chiu, Wei-Chen %Y Fritz, Mario %A referee: Demberg, Vera %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Bayesian Non-Parametrics for Multi-Modal Segmentation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-788A-F %U urn:nbn:de:bsz:291-scidok-66378 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XII, 155 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6637/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[14]
L. Del Corro, “Methods for Open Information Extraction and Sense Disambiguation on Natural Language Text,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{delcorrophd15, TITLE = {Methods for Open Information Extraction and Sense Disambiguation on Natural Language Text}, AUTHOR = {Del Corro, Luciano}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Del Corro, Luciano %Y Gemulla, Rainer %A referee: Ponzetto, Simone Paolo %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Methods for Open Information Extraction and Sense Disambiguation on Natural Language Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-B3DB-3 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xiv, 101 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6346/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[15]
N. T. Doncheva, “Network Biology Methods for Functional Characterization and Integrative Prioritization of Disease Genes and Proteins,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{DonchevaPhD2016, TITLE = {Network Biology Methods for Functional Characterization and Integrative Prioritization of Disease Genes and Proteins}, AUTHOR = {Doncheva, Nadezhda Tsankova}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65957}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Doncheva, Nadezhda Tsankova %Y Albrecht, Mario %A referee: Lengauer, Thomas %A referee: Lenhof, Hans-Peter %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Network Biology Methods for Functional Characterization and Integrative Prioritization of Disease Genes and Proteins : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-1921-A %U urn:nbn:de:bsz:291-scidok-65957 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XII, 242 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6595/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[16]
O. Elek, “Efficient Methods for Physically-based Rendering of Participating Media,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{ElekPhD2016, TITLE = {Efficient Methods for Physically-based Rendering of Participating Media}, AUTHOR = {Elek, Oskar}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65357}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Elek, Oskar %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %A referee: Dachsbacher, Karsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Efficient Methods for Physically-based Rendering of Participating Media : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-F94D-E %U urn:nbn:de:bsz:291-scidok-65357 %I Universität des Saarlandes %C Saarbrücken %D 2016 %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6535/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[17]
H. Hatefi Ardakani, “Finite Horizon Analysis of Markov Automata,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span, as special cases, the models of discrete and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they might be equipped with reward and resource structures in order to be used for analysing quantitative aspects of systems, like performance metrics, energy consumption, repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety critical systems. The Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness, however, has thus far prevented efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis lies in their analysis under some budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study mathematical foundations of Markov automata since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we focus on the problem of computing the optimal expected resource-bounded reward. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses in particular the optimal time-bound reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem in an efficient way. We report on an implementation of our approach in a supporting tool and also demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies.
Export
BibTeX
@phdthesis{Hatefiphd17, TITLE = {Finite Horizon Analysis of {M}arkov Automata}, AUTHOR = {Hatefi Ardakani, Hassan}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67438}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, ABSTRACT = {Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span, as special cases, the models of discrete and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they might be equipped with reward and resource structures in order to be used for analysing quantitative aspects of systems, like performance metrics, energy consumption, repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety critical systems. The Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness, however, has thus far prevented efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis lies in their analysis under some budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study mathematical foundations of Markov automata since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we focus on the problem of computing the optimal expected resource-bounded reward. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses in particular the optimal time-bound reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem in an efficient way. We report on an implementation of our approach in a supporting tool and also demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies.}, }
Endnote
%0 Thesis %A Hatefi Ardakani, Hassan %Y Hermanns, Holger %A referee: Buchholz, Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Finite Horizon Analysis of Markov Automata : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-9E81-C %U urn:nbn:de:bsz:291-scidok-67438 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P X, 175 p. %V phd %9 phd %X Markov automata constitute an expressive continuous-time compositional modelling formalism, featuring stochastic timing and nondeterministic as well as probabilistic branching, all supported in one model. They span, as special cases, the models of discrete and continuous-time Markov chains, as well as interactive Markov chains and probabilistic automata. Moreover, they might be equipped with reward and resource structures in order to be used for analysing quantitative aspects of systems, like performance metrics, energy consumption, repair and maintenance costs. Due to their expressive nature, they serve as semantic backbones of engineering frameworks, control applications and safety critical systems. The Architecture Analysis and Design Language (AADL), Dynamic Fault Trees (DFT) and Generalised Stochastic Petri Nets (GSPN) are just some examples. Their expressiveness, however, has thus far prevented efficient analysis by stochastic solvers and probabilistic model checkers. A major problem context of this thesis lies in their analysis under some budget constraints, i.e. when only a finite budget of resources can be spent by the model. We study mathematical foundations of Markov automata since these are essential for the analysis addressed in this thesis. This includes, in particular, understanding their measurability and establishing their probability measure. Furthermore, we address the analysis of Markov automata in the presence of both reward acquisition and resource consumption within a finite budget of resources. More specifically, we focus on the problem of computing the optimal expected resource-bounded reward. In our general setting, we support transient, instantaneous and final reward collection as well as transient resource consumption. Our general formulation of the problem encompasses in particular the optimal time-bound reward and reachability as well as resource-bounded reachability. We develop a sound theory together with a stable approximation scheme with a strict error bound to solve the problem in an efficient way. We report on an implementation of our approach in a supporting tool and also demonstrate its effectiveness and usability over an extensive collection of industrial and academic case studies. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6743/
[18]
A.-C. Hauschild, “Computational Methods for Breath Metabolomics in Clinical Diagnostics,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Hauschild_PhD2016, TITLE = {Computational Methods for Breath Metabolomics in Clinical Diagnostics}, AUTHOR = {Hauschild, Anne-Christin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65874}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Hauschild, Anne-Christin %Y Helms, Volkhard %A referee: Baumbach, Jan %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Computational Methods for Breath Metabolomics in Clinical Diagnostics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0C18-7 %U urn:nbn:de:bsz:291-scidok-65874 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P 188 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6587/
[19]
P. Kellnhofer, “Perceptual Modeling for Stereoscopic 3D,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Virtual and Augmented Reality applications typically both rely on stereoscopic presentation and involve intensive object and observer motion. A combination of high dynamic range and stereoscopic capabilities is becoming popular for consumer displays, and is a desirable functionality of head-mounted displays to come. The thesis is focused on complex interactions between all these visual cues on digital displays. The first part investigates challenges of the stereoscopic 3D and motion combination. We consider the interaction between continuous motion and its presentation as discrete frames. Then, we discuss disparity processing for accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and eye fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond the current display capabilities by considering the full perceivable luminance range and we simulate the real world experience in such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and reflective/refractive surface rendering. The core of our research methodology is perceptual modeling supported by our own experimental studies to overcome limitations of current display technologies and improve the viewer experience by enhancing perceived depth, reducing visual artifacts or improving viewing comfort.
Export
BibTeX
@phdthesis{Kellnhoferphd2016, TITLE = {Perceptual Modeling for Stereoscopic {3D}}, AUTHOR = {Kellnhofer, Petr}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66813}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, ABSTRACT = {Virtual and Augmented Reality applications typically both rely on stereoscopic presentation and involve intensive object and observer motion. A combination of high dynamic range and stereoscopic capabilities is becoming popular for consumer displays, and is a desirable functionality of head-mounted displays to come. The thesis is focused on complex interactions between all these visual cues on digital displays. The first part investigates challenges of the stereoscopic 3D and motion combination. We consider the interaction between continuous motion and its presentation as discrete frames. Then, we discuss disparity processing for accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and eye fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond the current display capabilities by considering the full perceivable luminance range and we simulate the real world experience in such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and reflective/refractive surface rendering. The core of our research methodology is perceptual modeling supported by our own experimental studies to overcome limitations of current display technologies and improve the viewer experience by enhancing perceived depth, reducing visual artifacts or improving viewing comfort.}, }
Endnote
%0 Thesis %A Kellnhofer, Petr %Y Myszkowski, Karol %A referee: Seidel, Hans-Peter %A referee: Masia, Belen %A referee: Matusik, Wojciech %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Perceptual Modeling for Stereoscopic 3D : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-BBA6-1 %U urn:nbn:de:bsz:291-scidok-66813 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xxiv, 158 p. %V phd %9 phd %X Virtual and Augmented Reality applications typically both rely on stereoscopic presentation and involve intensive object and observer motion. A combination of high dynamic range and stereoscopic capabilities is becoming popular for consumer displays, and is a desirable functionality of head-mounted displays to come. The thesis is focused on complex interactions between all these visual cues on digital displays. The first part investigates challenges of the stereoscopic 3D and motion combination. We consider the interaction between continuous motion and its presentation as discrete frames. Then, we discuss disparity processing for accurate reproduction of objects moving in the depth direction. Finally, we investigate depth perception as a function of motion parallax and eye fixation changes by means of saccadic motion. The second part focuses on the role of high dynamic range imaging for stereoscopic displays. We go beyond the current display capabilities by considering the full perceivable luminance range and we simulate the real world experience in such adaptation conditions. In particular, we address the problems of disparity retargeting across such wide luminance ranges and reflective/refractive surface rendering. The core of our research methodology is perceptual modeling supported by our own experimental studies to overcome limitations of current display technologies and improve the viewer experience by enhancing perceived depth, reducing visual artifacts or improving viewing comfort. %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6681/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[20]
O. Klehm, “User-Guided Scene Stylization using Efficient Rendering Techniques,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Klehmphd2016, TITLE = {User-Guided Scene Stylization using Efficient Rendering Techniques}, AUTHOR = {Klehm, Oliver}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65321}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Klehm, Oliver %Y Seidel, Hans-Peter %A referee: Eisemann, Elmar %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T User-Guided Scene Stylization using Efficient Rendering Techniques : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-9C13-A %U urn:nbn:de:bsz:291-scidok-65321 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XIII, 111 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6532/
[21]
M. Košta, “New Concepts for Real Quantifier Elimination by Virtual Substitution,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Kostaphd16, TITLE = {New Concepts for Real Quantifier Elimination by Virtual Substitution}, AUTHOR = {Ko{\v s}ta, Marek}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Košta, Marek %Y Sturm, Thomas %A referee: Weber, Andreas %A referee: Weidenbach, Christoph %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations Automation of Logic, MPI for Informatics, Max Planck Society %T New Concepts for Real Quantifier Elimination by Virtual Substitution : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-30A8-9 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xvi, 214 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6716/
[22]
M. Künnemann, “Tight(er) Bounds for Similarity Measures, Smoothed Approximation and Broadcasting,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Kuennemannphd2016, TITLE = {Tight(er) Bounds for Similarity Measures, Smoothed Approximation and Broadcasting}, AUTHOR = {K{\"u}nnemann, Marvin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65991}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Künnemann, Marvin %Y Doerr, Benjamin %A referee: Mehlhorn, Kurt %A referee: Welzl, Emo %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Tight(er) Bounds for Similarity Measures, Smoothed Approximation and Broadcasting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-423A-3 %U urn:nbn:de:bsz:291-scidok-65991 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XI, 223 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6599/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[23]
S. Ott, “Algorithms for Classical and Modern Scheduling Problems,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Ott_PhD2016, TITLE = {Algorithms for Classical and Modern Scheduling Problems}, AUTHOR = {Ott, Sebastian}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65763}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Ott, Sebastian %Y Mehlhorn, Kurt %A referee: Huang, Chien-Chung %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Algorithms for Classical and Modern Scheduling Problems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0C1B-1 %U urn:nbn:de:bsz:291-scidok-65763 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P IX, 109 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6576/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[24]
A. Pironti, “Improving and Validating Data-driven Genotypic Interpretation Systems for the Selection of Antiretroviral Therapies,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Pirontiphd16, TITLE = {Improving and Validating Data-driven Genotypic Interpretation Systems for the Selection of Antiretroviral Therapies}, AUTHOR = {Pironti, Alejandro}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67190}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Pironti, Alejandro %Y Lengauer, Thomas %A referee: Lenhof, Hans-Peter %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T Improving and Validating Data-driven Genotypic Interpretation Systems for the Selection of Antiretroviral Therapies : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-30D5-5 %U urn:nbn:de:bsz:291-scidok-67190 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P x, 272 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6719/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[25]
L. Pishchulin, “Articulated People Detection and Pose Estimation in Challenging Real World Environments,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{PishchulinPhD2016, TITLE = {Articulated People Detection and Pose Estimation in Challenging Real World Environments}, AUTHOR = {Pishchulin, Leonid}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-65478}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Pishchulin, Leonid %Y Schiele, Bernt %A referee: Theobalt, Christian %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Articulated People Detection and Pose Estimation in Challenging Real World Environments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-F000-B %U urn:nbn:de:bsz:291-scidok-65478 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XIII, 248 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6547/
[26]
S. S. Rangapuram, “Graph-based Methods for Unsupervised and Semi-supervised Data Analysis,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{rangphd17, TITLE = {Graph-based Methods for Unsupervised and Semi-supervised Data Analysis}, AUTHOR = {Rangapuram, Syama Sundar}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66590}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Rangapuram, Syama Sundar %Y Hein, Matthias %A referee: Hoai An, Le Thi %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Graph-based Methods for Unsupervised and Semi-supervised Data Analysis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-9EA4-D %U urn:nbn:de:bsz:291-scidok-66590 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XI, 161 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6659/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[27]
B. Reinert, “Interactive, Example-driven Synthesis and Manipulation of Visual Media,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Reinertbphd17, TITLE = {Interactive, Example-driven Synthesis and Manipulation of Visual Media}, AUTHOR = {Reinert, Bernhard}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67660}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Reinert, Bernhard %Y Seidel, Hans-Peter %A referee: Ritschel, Tobias %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Interactive, Example-driven Synthesis and Manipulation of Visual Media : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5A03-B %U urn:nbn:de:bsz:291-scidok-67660 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XX, 116, XVII p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6766/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[28]
H. Rhodin, “From Motion Capture to Interactive Virtual Worlds: Towards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{RhodinPhD2016, TITLE = {From Motion Capture to Interactive Virtual Worlds: {T}owards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation}, AUTHOR = {Rhodin, Helge}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67413}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Rhodin, Helge %Y Theobalt, Christian %A referee: Seidel, Hans-Peter %A referee: Bregler, Christoph %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T From Motion Capture to Interactive Virtual Worlds: Towards Unconstrained Motion-Capture Algorithms for Real-time Performance-Driven Character Animation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-6310-C %U urn:nbn:de:bsz:291-scidok-67413 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P 179 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6741/
[29]
S. Sridhar, “Tracking Hands in Action for Gesture-based Computer Input,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{SridharPhD2016, TITLE = {Tracking Hands in Action for Gesture-based Computer Input}, AUTHOR = {Sridhar, Srinath}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67712}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Sridhar, Srinath %Y Theobalt, Christian %A referee: Oulasvirta, Antti %A referee: Schiele, Bernt %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Tracking Hands in Action for Gesture-based Computer Input : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-631C-3 %U urn:nbn:de:bsz:291-scidok-67712 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XXIII, 161 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6771/
[30]
N. Tandon, “Commonsense Knowledge Acquisition and Applications,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{TandonPhD2016, TITLE = {Commonsense Knowledge Acquisition and Applications}, AUTHOR = {Tandon, Niket}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66291}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Tandon, Niket %Y Weikum, Gerhard %A referee: Lieberman, Henry %A referee: Vreeken, Jilles %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Commonsense Knowledge Acquisition and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-78F6-A %U urn:nbn:de:bsz:291-scidok-66291 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XIV, 154 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6629/
[31]
C. Teflioudi, “Algorithms for Shared-Memory Matrix Completion and Maximum Inner Product Search,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Teflioudiphd2016, TITLE = {Algorithms for Shared-Memory Matrix Completion and Maximum Inner Product Search}, AUTHOR = {Teflioudi, Christina}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-64699}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Teflioudi, Christina %Y Gemulla, Rainer %A referee: Weikum, Gerhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Algorithms for Shared-Memory Matrix Completion and Maximum Inner Product Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-43FA-2 %U urn:nbn:de:bsz:291-scidok-64699 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xi, 110 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6469/
[32]
K. Templin, “Depth, Shading, and Stylization in Stereoscopic Cinematography,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{Templinphd15, TITLE = {Depth, Shading, and Stylization in Stereoscopic Cinematography}, AUTHOR = {Templin, Krzysztof}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-64390}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Templin, Krzysztof %Y Seidel, Hans-Peter %A referee: Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Depth, Shading, and Stylization in Stereoscopic Cinematography : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-19FA-2 %U urn:nbn:de:bsz:291-scidok-64390 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P xii, 100 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6439/
[33]
B. Turoňová, “Progressive Stochastic Reconstruction Technique for Cryo Electron Tomography,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{TuronovaPhD2016, TITLE = {Progressive Stochastic Reconstruction Technique for Cryo Electron Tomography}, AUTHOR = {Turo{\v n}ov{\'a}, Beata}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-66400}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Turoňová, Beata %Y Slusallek, Philipp %A referee: Louis, Alfred K. %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Progressive Stochastic Reconstruction Technique for Cryo Electron Tomography : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-7898-F %U urn:nbn:de:bsz:291-scidok-66400 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P XI, 118 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6640/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[34]
M. Yahya, “Question Answering and Query Processing for Extended Knowledge Graphs,” Universität des Saarlandes, Saarbrücken, 2016.
Export
BibTeX
@phdthesis{yahyaphd2016, TITLE = {Question Answering and Query Processing for Extended Knowledge Graphs}, AUTHOR = {Yahya, Mohamed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016}, }
Endnote
%0 Thesis %A Yahya, Mohamed %Y Weikum, Gerhard %A referee: Schütze, Hinrich %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Question Answering and Query Processing for Extended Knowledge Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-48C2-7 %I Universität des Saarlandes %C Saarbrücken %D 2016 %P x, 160 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2016/6476/