Publications of the International Max Planck Research School for Computer Science

Master Theses

2016
[1]
M. Alzayat, “PolSim: Automatic Policy Validation via Meta-Data Flow Simulation,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Every year, millions of confidential data records are leaked accidentally due to bugs, misconfiguration, or operator error. These incidents are common in large, complex, and fast-evolving data processing systems, and ensuring compliance with data policies in such systems is a major challenge. Thoth is an information flow control system that uses coarse-grained taint tracking to control the flow of data by enforcing relevant declarative policies at process boundaries. This enforcement applies regardless of bugs, misconfiguration, or compromises in application code, and regardless of actions by unprivileged operators. Designing policies that ensure that all and only compliant flows are allowed remains a complex and error-prone process. In this work, we introduce PolSim, a simulation tool that aids system policy designers by validating the provided policies and systematically ensuring that the system allows all and only the expected flows. Our simulator approximates the dynamic run-time environment, semi-automatically suggests internal flow policies based on the data flow, and provides debugging hints to help policy designers develop a working policy for the intended system before deployment.
BibTeX
@mastersthesis{Alzayatmaster2016,
  TITLE    = {Pol{S}im: Automatic Policy Validation via Meta-Data Flow Simulation},
  AUTHOR   = {Alzayat, Mohamed},
  LANGUAGE = {eng},
  SCHOOL   = {Universit{\"a}t des Saarlandes},
  ADDRESS  = {Saarbr{\"u}cken},
  YEAR     = {2016},
  DATE     = {2016-09-27},
  URL      = {http://hdl.handle.net/11858/00-001M-0000-002C-ACCC-8},
  ABSTRACT = {Every year, millions of confidential data records are leaked accidentally due to bugs, misconfiguration, or operator error. These incidents are common in large, complex, and fast-evolving data processing systems, and ensuring compliance with data policies in such systems is a major challenge. Thoth is an information flow control system that uses coarse-grained taint tracking to control the flow of data by enforcing relevant declarative policies at process boundaries. This enforcement applies regardless of bugs, misconfiguration, or compromises in application code, and regardless of actions by unprivileged operators. Designing policies that ensure that all and only compliant flows are allowed remains a complex and error-prone process. In this work, we introduce PolSim, a simulation tool that aids system policy designers by validating the provided policies and systematically ensuring that the system allows all and only the expected flows. Our simulator approximates the dynamic run-time environment, semi-automatically suggests internal flow policies based on the data flow, and provides debugging hints to help policy designers develop a working policy for the intended system before deployment.},
}
[2]
S. Bozca, “Discrete Osmosis Methods for Image Processing,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Partial differential equations can model many physical phenomena and can be simulated on a computer. Osmosis, which takes the form of a convection-diffusion equation, has found many application areas in image processing. However, the slow convergence of this model under current methods, which depends on the incompatibility of the drift vector field used in the model, rules out fast and possibly real-time applications. In this thesis, we therefore take a deeper look at what incompatibility means and how it affects the steady states of the osmosis process. In addition, we evaluate several promising methods that offer a substantial computational advantage over classical iterative methods.
BibTeX
@mastersthesis{BozcaMaster2016,
  TITLE    = {Discrete Osmosis Methods for Image Processing},
  AUTHOR   = {Bozca, Sinan},
  LANGUAGE = {eng},
  SCHOOL   = {Universit{\"a}t des Saarlandes},
  ADDRESS  = {Saarbr{\"u}cken},
  YEAR     = {2016},
  DATE     = {2016},
  URL      = {http://hdl.handle.net/11858/00-001M-0000-002C-41DD-1},
  ABSTRACT = {Partial differential equations can model many physical phenomena and can be simulated on a computer. Osmosis, which takes the form of a convection-diffusion equation, has found many application areas in image processing. However, the slow convergence of this model under current methods, which depends on the incompatibility of the drift vector field used in the model, rules out fast and possibly real-time applications. In this thesis, we therefore take a deeper look at what incompatibility means and how it affects the steady states of the osmosis process. In addition, we evaluate several promising methods that offer a substantial computational advantage over classical iterative methods.},
}
[3]
C. X. Chu, “Mining How-to Task Knowledge from Online Communities,” Universität des Saarlandes, Saarbrücken, 2016.
BibTeX
@mastersthesis{ChuMSc2016,
  TITLE    = {Mining How-to Task Knowledge from Online Communities},
  AUTHOR   = {Chu, Cuong Xuan},
  LANGUAGE = {eng},
  SCHOOL   = {Universit{\"a}t des Saarlandes},
  ADDRESS  = {Saarbr{\"u}cken},
  YEAR     = {2016},
  DATE     = {2016},
  URL      = {http://hdl.handle.net/11858/00-001M-0000-002C-491D-B},
}
[4]
O. Darwish, “Market Equilibrium Computation for the Linear Arrow-Debreu Model,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
The market equilibrium problem is the problem of finding prices for goods such that the supply in the market equals the demand. The problem applies to several market models, like the linear Arrow-Debreu model, which is one of the fundamental economic market models. Over the years, various algorithms have been developed to compute the market equilibrium of the linear Arrow-Debreu model. In 2013, Duan and Mehlhorn presented the first combinatorial polynomial-time algorithm for computing the market equilibrium of this model. In this thesis, we optimize, generalize, and implement the Duan-Mehlhorn algorithm. We present a novel algorithm for computing balanced flows in equality networks, which is an application of parametric flows. This algorithm outperforms the current best algorithm for computing balanced flows; hence, it improves Duan-Mehlhorn's algorithm by almost a factor of n, the size of the network. Moreover, we generalize Duan-Mehlhorn's algorithm by relaxing some of its assumptions. Finally, we describe our approach to implementing Duan-Mehlhorn's algorithm. The preliminary results of our implementation, based on random utility instances, show that its running time scales significantly better than the theoretical time complexity suggests.
BibTeX
@mastersthesis{DarwishMaster2016,
  TITLE    = {Market Equilibrium Computation for the Linear Arrow-Debreu Model},
  AUTHOR   = {Darwish, Omar},
  LANGUAGE = {eng},
  SCHOOL   = {Universit{\"a}t des Saarlandes},
  ADDRESS  = {Saarbr{\"u}cken},
  YEAR     = {2016},
  DATE     = {2016-03-31},
  URL      = {http://hdl.handle.net/11858/00-001M-0000-002C-41D0-C},
  ABSTRACT = {The market equilibrium problem is the problem of finding prices for goods such that the supply in the market equals the demand. The problem applies to several market models, like the linear Arrow-Debreu model, which is one of the fundamental economic market models. Over the years, various algorithms have been developed to compute the market equilibrium of the linear Arrow-Debreu model. In 2013, Duan and Mehlhorn presented the first combinatorial polynomial-time algorithm for computing the market equilibrium of this model. In this thesis, we optimize, generalize, and implement the Duan-Mehlhorn algorithm. We present a novel algorithm for computing balanced flows in equality networks, which is an application of parametric flows. This algorithm outperforms the current best algorithm for computing balanced flows; hence, it improves Duan-Mehlhorn's algorithm by almost a factor of n, the size of the network. Moreover, we generalize Duan-Mehlhorn's algorithm by relaxing some of its assumptions. Finally, we describe our approach to implementing Duan-Mehlhorn's algorithm. The preliminary results of our implementation, based on random utility instances, show that its running time scales significantly better than the theoretical time complexity suggests.},
}
[5]
A. El-Korashy, “A Formal Model for Capability Machines: An Illustrative Case Study towards Secure Compilation to CHERI,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Vulnerabilities in computer systems arise in part from programmers' logical errors, and in part from programmers' false (i.e., over-optimistic) expectations about the guarantees given by the abstractions of a programming language. For the latter kind of vulnerability, architectures with hardware or instruction-level support for protection mechanisms can be useful. One trend in computer systems protection is hardware-supported enforcement of security guarantees and policies. Capability-based machines are one instance of hardware-based protection mechanisms. CHERI is a recent implementation of a 64-bit MIPS-based capability architecture with byte-granularity memory protection. The goal of this thesis is to provide a pen-and-paper formal model of the CHERI architecture, with the aim of reasoning formally about the security guarantees that the features of CHERI can offer. We first give simplified operational semantics for the instructions, then prove that capabilities are unforgeable in our model. Second, we show that existing techniques for enforcing control-flow integrity can be adapted to the CHERI ISA. Third, we show that one notion of memory compartmentalization can be achieved with the help of CHERI's memory protection. We conclude by suggesting other security building blocks that would be helpful to reason about, and by laying out a plan for potentially using this work to build a secure compiler, i.e., a compiler that preserves security properties. The outlook and motivation for this work is to highlight the potential of CHERI as a target architecture for secure compilation.
BibTeX
@mastersthesis{El-KorashyMaster2016,
  TITLE    = {A Formal Model for Capability Machines: An Illustrative Case Study towards Secure Compilation to {CHERI}},
  AUTHOR   = {El-Korashy, Akram},
  LANGUAGE = {eng},
  SCHOOL   = {Universit{\"a}t des Saarlandes},
  ADDRESS  = {Saarbr{\"u}cken},
  YEAR     = {2016},
  DATE     = {2016},
  URL      = {http://hdl.handle.net/11858/00-001M-0000-002C-41CA-B},
  ABSTRACT = {Vulnerabilities in computer systems arise in part from programmers' logical errors, and in part from programmers' false (i.e., over-optimistic) expectations about the guarantees given by the abstractions of a programming language. For the latter kind of vulnerability, architectures with hardware or instruction-level support for protection mechanisms can be useful. One trend in computer systems protection is hardware-supported enforcement of security guarantees and policies. Capability-based machines are one instance of hardware-based protection mechanisms. CHERI is a recent implementation of a 64-bit MIPS-based capability architecture with byte-granularity memory protection. The goal of this thesis is to provide a pen-and-paper formal model of the CHERI architecture, with the aim of reasoning formally about the security guarantees that the features of CHERI can offer. We first give simplified operational semantics for the instructions, then prove that capabilities are unforgeable in our model. Second, we show that existing techniques for enforcing control-flow integrity can be adapted to the CHERI ISA. Third, we show that one notion of memory compartmentalization can be achieved with the help of CHERI's memory protection. We conclude by suggesting other security building blocks that would be helpful to reason about, and by laying out a plan for potentially using this work to build a secure compiler, i.e., a compiler that preserves security properties. The outlook and motivation for this work is to highlight the potential of CHERI as a target architecture for secure compilation.},
}
[6]
A. Hanka, “Material Appearance Editing in Complex Volume and Surface Renderings,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
When considering global illumination, material editing is a non-linear task, and even in scenes of moderate complexity, its global nature makes predicting the final appearance of other objects in the scene difficult. In this thesis, a novel interactive method is proposed for object appearance design. To achieve this, a randomized per-pixel parametrization of scene materials is defined: at rendering time, parametrized materials have different properties for every pixel. This way, multiple rendered results are encoded into one image; we call this collection of data a hyperimage. Material editing then means projecting the hyperimage onto a given parameter vector, which is achieved using non-linear weighted regression. Pixel guides based on the geometry (normals, depth, and unique object ID), materials, and lighting properties of the scene enter the regression problem as pixel weights. To ensure that only relevant features are considered, a rendering-based feature selection method is introduced, which uses a precomputed pixel-feature function encoding the per-pixel importance of each parametrized material. The method of hyperimages is independent of the underlying rendering algorithm, while supporting full global illumination and surface interactions. Our method is not limited to the parametrization of materials and can be extended to other scene properties; as an example, we show a parametrization of the position of an area light source.
BibTeX
@mastersthesis{HankaMSc2016,
  TITLE    = {Material Appearance Editing in Complex Volume and Surface Renderings},
  AUTHOR   = {Hanka, Adam},
  LANGUAGE = {eng},
  SCHOOL   = {Universit{\"a}t des Saarlandes},
  ADDRESS  = {Saarbr{\"u}cken},
  YEAR     = {2016},
  DATE     = {2016-03-31},
  URL      = {http://hdl.handle.net/11858/00-001M-0000-002C-41E0-8},
  ABSTRACT = {When considering global illumination, material editing is a non-linear task, and even in scenes of moderate complexity, its global nature makes predicting the final appearance of other objects in the scene difficult. In this thesis, a novel interactive method is proposed for object appearance design. To achieve this, a randomized per-pixel parametrization of scene materials is defined: at rendering time, parametrized materials have different properties for every pixel. This way, multiple rendered results are encoded into one image; we call this collection of data a hyperimage. Material editing then means projecting the hyperimage onto a given parameter vector, which is achieved using non-linear weighted regression. Pixel guides based on the geometry (normals, depth, and unique object ID), materials, and lighting properties of the scene enter the regression problem as pixel weights. To ensure that only relevant features are considered, a rendering-based feature selection method is introduced, which uses a precomputed pixel-feature function encoding the per-pixel importance of each parametrized material. The method of hyperimages is independent of the underlying rendering algorithm, while supporting full global illumination and surface interactions. Our method is not limited to the parametrization of materials and can be extended to other scene properties; as an example, we show a parametrization of the position of an area light source.},
}
[7]
A. Mokarian Forooshani, “Deep Learning for Filling Blanks in Image Captions,” Universität des Saarlandes, Saarbrücken, 2016.
BibTeX
@mastersthesis{MokarianForooshaniMaster2016,
  TITLE    = {Deep Learning for Filling Blanks in Image Captions},
  AUTHOR   = {Mokarian Forooshani, Ashkan},
  LANGUAGE = {eng},
  SCHOOL   = {Universit{\"a}t des Saarlandes},
  ADDRESS  = {Saarbr{\"u}cken},
  YEAR     = {2016},
  DATE     = {2016},
  URL      = {http://hdl.handle.net/11858/00-001M-0000-002B-1FA3-7},
}
[8]
R. Sethi, “Evaluation of Population-Based Haplotype Phasing Algorithms,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
The valuable information in the correct order of alleles on the haplotypes has many applications in GWAS studies and population genetics. A considerable number of computational and statistical algorithms have been developed for haplotype phasing. Historically, these algorithms were compared using simulated population data with less dense markers, inspired by genotype data from the HapMap project. Currently, due to the advancement and falling cost of NGS, thousands of individuals across the world have been sequenced in the 1000 Genomes Project. This has generated genotype information for individuals of different ethnicities, along with much denser genetic variation. Here, we have developed a scalable approach to assess state-of-the-art population-based haplotype phasing algorithms with benchmark data designed by simulating the population (unrelated and related individuals), the NGS pipeline, and genotype calling. The most accurate algorithm for phase inference in unrelated individuals was MVNCall (v1), while the DuoHMM approach of Shapeit (v2) had the lowest switch error rate of 0.298% (with true genotype likelihoods) in related individuals. Moreover, we also conducted a comprehensive assessment of algorithms for the imputation of missing genotypes in the population with a reference panel. For these metrics, Impute2 (v2.3.2) and Beagle (v4.1) both performed competitively under different imputation scenarios and had genotype concordance rates of >99%. However, Impute2 was better at imputing genotypes with a minor allele frequency of <0.025 in the reference panel.
BibTeX
@mastersthesis{SethiMaster2016,
  TITLE    = {Evaluation of Population-Based Haplotype Phasing Algorithms},
  AUTHOR   = {Sethi, Riccha},
  LANGUAGE = {eng},
  SCHOOL   = {Universit{\"a}t des Saarlandes},
  ADDRESS  = {Saarbr{\"u}cken},
  YEAR     = {2016},
  DATE     = {2016-03-09},
  URL      = {http://hdl.handle.net/11858/00-001M-0000-002C-41DA-7},
  ABSTRACT = {The valuable information in the correct order of alleles on the haplotypes has many applications in GWAS studies and population genetics. A considerable number of computational and statistical algorithms have been developed for haplotype phasing. Historically, these algorithms were compared using simulated population data with less dense markers, inspired by genotype data from the HapMap project. Currently, due to the advancement and falling cost of NGS, thousands of individuals across the world have been sequenced in the 1000 Genomes Project. This has generated genotype information for individuals of different ethnicities, along with much denser genetic variation. Here, we have developed a scalable approach to assess state-of-the-art population-based haplotype phasing algorithms with benchmark data designed by simulating the population (unrelated and related individuals), the NGS pipeline, and genotype calling. The most accurate algorithm for phase inference in unrelated individuals was MVNCall (v1), while the DuoHMM approach of Shapeit (v2) had the lowest switch error rate of 0.298% (with true genotype likelihoods) in related individuals. Moreover, we also conducted a comprehensive assessment of algorithms for the imputation of missing genotypes in the population with a reference panel. For these metrics, Impute2 (v2.3.2) and Beagle (v4.1) both performed competitively under different imputation scenarios and had genotype concordance rates of >99%. However, Impute2 was better at imputing genotypes with a minor allele frequency of <0.025 in the reference panel.},
}
[9]
B. T. Teklehaimanot, “Virtualization of Video Streaming Functions,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Edgeware is a leading provider of video streaming solutions to network and service operators. The Edgeware Video Consolidation Platform (VCP) is a complete video streaming solution consisting of the Convoy Management system and Orbit streaming servers. The Orbit streaming servers are purpose-designed hardware platforms composed of a dedicated hardware streaming engine and a purpose-designed flash storage system. The Orbit streaming server is an accelerated HTTP streaming cache server that offers up to 80 Gbps of bandwidth and can stream to 128,000 clients from a single rack unit. In line with the trend of moving more and more functionality into virtualized or software environments, the main goal of this thesis is a performance comparison between Edgeware’s Orbit streaming server and one of the best generic HTTP accelerators (reverse proxy servers) after implementing the Orbit’s logging functionality on top of it. This is achieved by implementing test cases for the use cases that help evaluate these servers. Finally, after evaluating the candidate proxy servers, Varnish is selected, and the modified Varnish is then compared against the Orbit to investigate the performance difference.
Export
BibTeX
@mastersthesis{TeklehaimanotMaster2016, TITLE = {Virtualization of Video Streaming Functions}, AUTHOR = {Teklehaimanot, Birhan Tadele}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016-04-25}, ABSTRACT = {Edgeware is a leading provider of video streaming solutions to network and service operators. The Edgeware Video Consolidation Platform(VCP) is a complete video streaming solution consisting of the Convoy Management system and Orbit streaming servers. The Orbit streaming servers are purpose designed hardware platforms which are composed of a dedicated hardware streaming engine and a purpose designed flash as a storage system. The Orbit streaming server is an accelerated HTTP streaming cache server which have up to 80 Gbps bandwidth and can stream to 128000 clients from a single rack unit. In line with the new trend of moving more and more functionalities towards a virtualized or software environment, the main goal of this thesis is to make a performance comparison between Edgeware{\textquoteright}s Orbit streaming server and one of the best generic HTTP accelerators(reverse proxy severs) after implementing logging functionality of the Orbit on top of it. This is achieved by implementing test cases for the use cases that can help to evaluate those servers. Finally, after evaluating those proxy servers Varnish is selected and then compared the modified Varnish and Orbit to investigate the performance difference.}, }
Endnote
%0 Thesis %A Teklehaimanot, Birhan Tadele %Y Appelquist, G&#246;ran %A referee: Herfet, Thorsten %+ International Max Planck Research School, MPI for Informatics, Max Planck Society &#8206;CTO at Edgeware AB Universit&#228;t des Saarlandes %T Virtualization of Video Streaming Functions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-570D-F %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2016 %8 25.04.2016 %P 58 p. %V master %9 master %X Edgeware is a leading provider of video streaming solutions to network and service operators. The Edgeware Video Consolidation Platform(VCP) is a complete video streaming solution consisting of the Convoy Management system and Orbit streaming servers. The Orbit streaming servers are purpose designed hardware platforms which are composed of a dedicated hardware streaming engine and a purpose designed flash as a storage system. The Orbit streaming server is an accelerated HTTP streaming cache server which have up to 80 Gbps bandwidth and can stream to 128000 clients from a single rack unit. In line with the new trend of moving more and more functionalities towards a virtualized or software environment, the main goal of this thesis is to make a performance comparison between Edgeware&#8217;s Orbit streaming server and one of the best generic HTTP accelerators(reverse proxy severs) after implementing logging functionality of the Orbit on top of it. This is achieved by implementing test cases for the use cases that can help to evaluate those servers. Finally, after evaluating those proxy servers Varnish is selected and then compared the modified Varnish and Orbit to investigate the performance difference.
[10]
M. Zheng, “Comparison of Software Tools for microRNA Next Generation Sequencing Data Analysis,” Universität des Saarlandes, Saarbrücken, 2016.
Abstract
Next-generation sequencing (NGS) appears very promising for studying miRNAs comprehensively, as it can not only profile known miRNAs but also predict novel ones. An increasing number of software tools have been developed for microRNA NGS data analysis. Nevertheless, an overall comparison of these tools is still rare, and how divergent they are is still unknown, which makes it difficult for researchers to select an optimal tool. In our study, we performed a comprehensive comparison of seven representative software tools on real data in various respects, including detected known miRNAs, miRNA abundance, differential expression, and predicted novel miRNAs. We present the divergences and similarities of these tools and give a basic evaluation of their performance. In addition, some extreme cases in miRNAkey were explored. The comparison suggests that the performance of these software tools is very diverse and that caution is necessary when choosing one. The summary of the tools’ features and the comparison of their performance in our study will provide useful information to help researchers select an appropriate software tool.
Export
BibTeX
@mastersthesis{ZhengMaster2016, TITLE = {Comparison of Software Tools for {microRNA} Next Generation Sequencing Data Analysis}, AUTHOR = {Zheng, Menglin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2016-09-12}, ABSTRACT = {Next-generation sequencing (NGS) appears to be very promising to study miRNAs comprehensively, which can not only profile known miRNAs, but also predict novel miRNAs. There are an increasing number of software tools developed for microRNA NGS data analysis. Nevertheless, an overall comparison of these tools is still rare and how divergent these software tools are is still unknown, which confuses the researchers to select an optimal tool. In our study, we performed a comprehensive comparison of seven representative software tools based on real data in various aspects, including detected known miRNAs, miRNAs abundance, differential expression and predicted novel miRNAs. We presented the divergences and similarities of these tools and gave some basic evaluation of the tools{\textquoteright} performances. In addition, some extreme cases in miRNAkey were explored. The comparison of these tools suggests that the performances of these software tools are very diverse and the caution is necessary to take when choosing a software tool. The summarization of the tools{\textquoteright} features and comparison of their performances in our study will provide useful information for the researchers to promote their selection of an appropriate software tool.}, }
Endnote
%0 Thesis %A Zheng, Menglin %Y Backes, Christina %A referee: Keller, Andreas %A referee: Meese, Eckart %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Clinical Bioinformatics, Saarland University Clinical Bioinformatics, Saarland University Institute of Human Genetics, Saarland University Homburg %T Comparison of Software Tools for microRNA Next Generation Sequencing Data Analysis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-570A-6 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2016 %8 12.09.2016 %P 70 p. %V master %9 master %X Next-generation sequencing (NGS) appears to be very promising to study miRNAs comprehensively, which can not only profile known miRNAs, but also predict novel miRNAs. There are an increasing number of software tools developed for microRNA NGS data analysis. Nevertheless, an overall comparison of these tools is still rare and how divergent these software tools are is still unknown, which confuses the researchers to select an optimal tool. In our study, we performed a comprehensive comparison of seven representative software tools based on real data in various aspects, including detected known miRNAs, miRNAs abundance, differential expression and predicted novel miRNAs. We presented the divergences and similarities of these tools and gave some basic evaluation of the tools&#8217; performances. In addition, some extreme cases in miRNAkey were explored. The comparison of these tools suggests that the performances of these software tools are very diverse and the caution is necessary to take when choosing a software tool. The summarization of the tools&#8217; features and comparison of their performances in our study will provide useful information for the researchers to promote their selection of an appropriate software tool.
2015
[11]
G. Arvanitidis, “Robust Principal Component Analysis Based on the Trimmed Component Wise Reconstruction Error,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{Arvanitidis_Master2015, TITLE = {Robust Principal Component Analysis Based on the Trimmed Component Wise Reconstruction Error}, AUTHOR = {Arvanitidis, Georgios}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Arvanitidis, Georgios %Y Hein, Matthias %A referee: Schiele, Bernt %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Robust Principal Component Analysis Based on the Trimmed Component Wise Reconstruction Error : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-C7D2-1 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 85 p. %V master %9 master
[12]
D. Dedik, “Robust Type Classification of Out of Knowledge Base Entities,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{DedikMaster2015, TITLE = {Robust Type Classification of Out of Knowledge Base Entities}, AUTHOR = {Dedik, Darya}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Dedik, Darya %Y Weikum, Gerhard %A referee: Spaniol, Marc %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Robust Type Classification of Out of Knowledge Base Entities : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-C0EC-F %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 65 p. %V master %9 master
[13]
Ö. Erensoy, “Semantic Model Extraction from Semi-Structured Textual Resources,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{ErensoyMaster2015, TITLE = {Semantic Model Extraction from Semi-Structured Textual Resources}, AUTHOR = {Erensoy, {\"O}zg{\"u}n}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Erensoy, &#214;zg&#252;n %Y Siekmann, J&#246;rg %A referee: Sosnovsky, Sergey %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Semantic Model Extraction from Semi-Structured Textual Resources : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-CC6C-1 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 52 p. %V master %9 master
[14]
M. Gad-Elrab, “AIDArabic+ Named Entity Disambiguation for Arabic Text,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{Gad-ElrabMaster2015, TITLE = {{AIDArabic}+ Named Entity Disambiguation for Arabic Text}, AUTHOR = {Gad-Elrab, Mohamed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Gad-Elrab, Mohamed %Y Weikum, Gerhard %A referee: Berberich, Klaus %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T AIDArabic+ Named Entity Disambiguation for Arabic Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-0F70-5 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 56 p. %V master %9 master
[15]
M. Goyal, “Lumping of Approximate Master Equations for Networks,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{GoyalMaster2015, TITLE = {Lumping of Approximate Master Equations for Networks}, AUTHOR = {Goyal, Mayank}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Goyal, Mayank %Y Bortolussi, Luca %A referee: Wolf, Verena %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Lumping of Approximate Master Equations for Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-BB01-9 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 75 p. %V master %9 master
[16]
C. D. Hariman, “Part-Whole Commonsense Knowledge Harvesting from the Web,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{HarimanMaster2015, TITLE = {Part-Whole Commonsense Knowledge Harvesting from the Web}, AUTHOR = {Hariman, Charles Darwis}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Hariman, Charles Darwis %Y Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Part-Whole Commonsense Knowledge Harvesting from the Web : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-C0E6-C %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 53 p. %V master %9 master
[17]
R. F. Hulea, “Compressed Vibration Modes for Deformable Objects,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{HuleaMaster2015, TITLE = {Compressed Vibration Modes for Deformable Objects}, AUTHOR = {Hulea, Razvan Florin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Hulea, Razvan Florin %Y Hildebrandt, Klaus %A referee: Seidel, Hans-Peter %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Compressed Vibration Modes for Deformable Objects : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-2EAF-3 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 47 p. %V master %9 master
[18]
E. Kapcari, “Parallel vs. Traditional Faceted Browsing: Comparative Studies and Proposed Enhancements,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{KapcariMaster2015, TITLE = {Parallel vs. Traditional Faceted Browsing: Comparative Studies and Proposed Enhancements}, AUTHOR = {Kapcari, Edite}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Kapcari, Edite %Y Jameson, Anthony %A referee: Kr&#252;ger, Antonio %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Parallel vs. Traditional Faceted Browsing: Comparative Studies and Proposed Enhancements : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-BB2A-F %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %V master %9 master
[19]
U. Mahmood, “Ensuring Integrity of Recommendations in a Marketplace,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{MahmoodMaster2015, TITLE = {Ensuring Integrity of Recommendations in a Marketplace}, AUTHOR = {Mahmood, Uzair}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Mahmood, Uzair %Y Kate, Aniket %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Group M. Backes, Max Planck Institute for Software Systems, Max Planck Society %T Ensuring Integrity of Recommendations in a Marketplace : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-1545-D %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 58 p. %V master %9 master
[20]
P. Mandros, “Information Theoretic Supervised Feature Selection for Continuous Data,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{MandrosMaster2015, TITLE = {Information Theoretic Supervised Feature Selection for Continuous Data}, AUTHOR = {Mandros, Panagiotis}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Mandros, Panagiotis %Y Weikum, Gerhard %A referee: Vreeken, Jilles %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Information Theoretic Supervised Feature Selection for Continuous Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-BAF3-F %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 67 p. %V master %9 master
[21]
P. E. Mercado Lopez, “Clustering and Community Detection in Signed Networks,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{MercadoLopezMaster2014, TITLE = {Clustering and Community Detection in Signed Networks}, AUTHOR = {Mercado Lopez, Pedro Eduardo}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Mercado Lopez, Pedro Eduardo %Y Hein, Matthias %A referee: Andres, Bjoern %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Clustering and Community Detection in Signed Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-BAFD-C %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 161 p. %V master %9 master
[22]
N. Q. Nguyen, “A Tensor Block Coordinate Ascent Framework for Hyper Graph Matching,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{NguyenMaster2015, TITLE = {A Tensor Block Coordinate Ascent Framework for Hyper Graph Matching}, AUTHOR = {Nguyen, Ngoc Quynh}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Nguyen, Ngoc Quynh %Y Hein, Matthias %A referee: Weickert, Joachim %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Tensor Block Coordinate Ascent Framework for Hyper Graph Matching : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-BB10-7 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P 78 p. %V master %9 master
[23]
T. Zinchenko, “Redescription Mining Over non-Binary Data Sets Using Decision Trees,” Universität des Saarlandes, Saarbrücken, 2015.
Export
BibTeX
@mastersthesis{ZinchenkoMaster2014, TITLE = {Redescription Mining Over non-Binary Data Sets Using Decision Trees}, AUTHOR = {Zinchenko, Tetiana}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2015}, MARGINALMARK = {$\bullet$}, DATE = {2015}, }
Endnote
%0 Thesis %A Zinchenko, Tetiana %Y Miettinen, Pauli %A referee: Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Redescription Mining Over non-Binary Data Sets Using Decision Trees : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0024-B73A-5 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2015 %P X, 118 p. %V master %9 master
2014
[24]
H. A. S. Aslam, “Inter-Application Communication Testing of Android Applications Using Intent Fuzzing,” Universität des Saarlandes, Saarbrücken, 2014.
Abstract
Testing is a crucial stage in the software development process that is used to uncover bugs and potential security threats. If not conducted thoroughly, buggy software may cause erroneous, malicious, and even harmful behavior. Unfortunately, in most software systems testing is either completely neglected or not thoroughly conducted. One such example is Google's popular mobile platform, Android OS, where inter-application communication is not properly tested. This is because of the development overhead it imposes and the manual labour required of developers to set up the testing environment. Consequently, the lack of Android application testing continues to cause Android users to experience erroneous behavior and sudden crashes, impacting user experience and potentially resulting in financial losses. When a caller application attempts to communicate with a potentially buggy application, the caller application will suffer functional errors or may even crash. The user will then complain that the caller application is not providing the promised functionality, resulting in a devaluation of the application's user rating. Successive failures will no longer be considered isolated events, potentially crippling the credibility of the calling application's developer. In this thesis we present an automated tester for inter-application communication in Android applications. The approach used for testing is called intent-based testing. Android applications are typically divided into multiple components that communicate via intents: messages passed through Android OS to coordinate operations between the different components. Intents are also used for inter-application communication, rendering them relevant for security. In this work, we designed and built a fully automated tool called IntentFuzzer to test the stability of inter-application communication of Android applications using intents.
First, it statically analyzes the application to generate intents. Next, it tests the inter-application communication by fuzzing them, that is, injecting random input values that uncover unwanted behavior. In this way, we are able to expose several new defects, including potential security issues, which we discuss briefly in the Evaluation section.
Export
BibTeX
@mastersthesis{2014Aslam, TITLE = {Inter-Application Communication Testing of Android Applications Using Intent Fuzzing}, AUTHOR = {Aslam, Hafiz Ahmad Shahzad}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Testing is a crucial stage in the software development process that is used to uncover bugs and potential security threats. If not conducted thoroughly, buggy software may cause erroneous, malicious and even harmful behavior. Unfortunately in most software systems, testing is either completely neglected or not thoroughly conducted. One such example is Google's popular mobile platform, Android OS, where inter-application communication is not properly tested. This is because of the difficulty which it possesses in the development overhead and the manual labour required by developers in setting up the testing environment. Consequently, the lack of Android application testing continues to cause Android users to experience erroneous behavior and sudden crashes, impacting user experience and potentially resulting in financial losses. When a caller application attempts to communicate with a potentially buggy application, the caller application will suffer functional errors or it may even potentially crash. Incidentally, the user will complain that the caller application is not providing the promised functionality, resulting in a devaluation of the application's user rating. Successive failures will no longer be considered as isolated events, potentially crippling developer credibility of the calling application. In this thesis we present an automated tester for inter-application communication in Android applications. The approach used for testing is called Intent based Testing. Android applications are typically divided into multiple components that communicate via intents: messages passed through Android OS to coordinate operations between the different components. 
Intents are also used for inter-application communication, rendering them relevant for security. In this work, we designed and built a fully automated tool called IntentFuzzer, to test the stability of inter-application communication of Android applications using intents. Firstly, it statically analyzes the application to generate intents. Next, it tests the inter-application communication by fuzzing them, that is, injecting random input values that uncover unwanted behavior. In this way, we are able to expose several new defects including potential security issues which we discuss briefly in the Evaluation section.}, }
Endnote
%0 Thesis %A Aslam, Hafiz Ahmad Shahzad %Y Zeller, Andreas %A referee: Hammer, Christian %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Inter-Application Communication Testing of Android Applications Using Intent Fuzzing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-C91D-4 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2014 %P 51 p. %V master %9 master %X Testing is a crucial stage in the software development process that is used to uncover bugs and potential security threats. If not conducted thoroughly, buggy software may cause erroneous, malicious and even harmful behavior. Unfortunately in most software systems, testing is either completely neglected or not thoroughly conducted. One such example is Google's popular mobile platform, Android OS, where inter-application communication is not properly tested. This is because of the difficulty which it possesses in the development overhead and the manual labour required by developers in setting up the testing environment. Consequently, the lack of Android application testing continues to cause Android users to experience erroneous behavior and sudden crashes, impacting user experience and potentially resulting in financial losses. When a caller application attempts to communicate with a potentially buggy application, the caller application will suffer functional errors or it may even potentially crash. Incidentally, the user will complain that the caller application is not providing the promised functionality, resulting in a devaluation of the application's user rating. Successive failures will no longer be considered as isolated events, potentially crippling developer credibility of the calling application. In this thesis we present an automated tester for inter-application communication in Android applications. The approach used for testing is called Intent based Testing. 
Android applications are typically divided into multiple components that communicate via intents: messages passed through Android OS to coordinate operations between the different components. Intents are also used for inter-application communication, rendering them relevant for security. In this work, we designed and built a fully automated tool called IntentFuzzer, to test the stability of inter-application communication of Android applications using intents. Firstly, it statically analyzes the application to generate intents. Next, it tests the inter-application communication by fuzzing them, that is, injecting random input values that uncover unwanted behavior. In this way, we are able to expose several new defects including potential security issues which we discuss briefly in the Evaluation section.
[25]
I. Grishchenko, “Static Analysis of Android Applications,” Universität des Saarlandes, Saarbrücken, 2014.
Abstract
Mobile and portable devices are machines that users carry with them everywhere; they can be seen as constant personal assistants of modern life. Today the Android operating system for mobile devices is the most popular one, and the number of users still grows: as of September 2013, 1 billion devices had been activated [Goob]. This makes the Android market attractive for developers willing to provide new functionality. As a consequence, 48 billion applications ("apps") have been installed from the Google Play store [BBC]. Apps often require user data in order to perform their intended activity. At the same time, parts of this data can be treated as sensitive private information, for instance, authentication credentials for accessing a bank account. The most significant built-in security measure in Android, the permission system, provides only little control over how an app uses the supplied data. To mitigate the threat mentioned above, hidden unintended app activity, recent research goes in three main directions: inline reference monitoring modifies the app to make it safe according to user-defined restrictions, dynamic analysis monitors the app's execution in order to prevent undesired activity, and static analysis verifies the app's properties from its code prior to execution. As we want provable security guarantees before we execute the app, we focus on static analysis. This thesis presents a novel static analysis technique based on Horn clause resolution. In particular, we propose a small-step concrete semantics for Android apps and develop a new form of abstraction which is supported by general theorem provers. Additionally, we have proved the soundness of our analysis technique. We have developed a tool that takes the bytecode of an Android app and makes it accessible to the theorem prover.
This enables the automated verification of a variety of security properties, for instance, whether a certain functionality is preceded by a particular one (e.g., whether the output of a bank transaction is secured before sending it to the bank) or which values it operates on (e.g., whether the bank's IP address is the only possible transaction destination). A case study as well as a performance evaluation of our tool conclude this thesis.
Export
BibTeX
@mastersthesis{2014Grishchenko, TITLE = {Static Analysis of Android Applications}, AUTHOR = {Grishchenko, Ilya}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, ABSTRACT = {Mobile and portable devices accompany users everywhere; they can be seen as constant personal assistants of modern life. Today the Android operating system is the most popular one for mobile devices, and its user base continues to grow: as of September 2013, one billion devices had been activated [Goob]. This makes the Android market attractive for developers willing to provide new functionality. As a consequence, 48 billion applications ("apps") have been installed from the Google Play store [BBC]. Apps often require user data in order to perform their intended activity. At the same time, parts of this data can constitute sensitive private information, for instance authentication credentials for accessing a bank account. The most significant built-in security measure in Android, the permission system, provides only little control over how an app uses the supplied data. To mitigate the threat mentioned above, namely hidden unintended app activity, recent research follows three main directions: inline reference monitoring modifies the app to make it safe according to user-defined restrictions; dynamic analysis monitors the app's execution in order to prevent undesired activity; and static analysis verifies properties of the app from its code prior to execution. Since we want provable security guarantees before executing an app, we focus on static analysis. This thesis presents a novel static analysis technique based on Horn clause resolution. In particular, we propose a small-step concrete semantics for Android apps and develop a new form of abstraction that is supported by general-purpose theorem provers. Additionally, we prove the soundness of our analysis technique. We have developed a tool that takes the bytecode of an Android app and makes it accessible to the theorem prover. This enables the automated verification of a variety of security properties: for instance, whether a certain functionality is always preceded by a particular other one (e.g., whether the output of a bank transaction is secured before it is sent to the bank), or which values a functionality operates on (e.g., whether the bank's IP address is the only possible transaction destination). A case study as well as a performance evaluation of our tool conclude this thesis.}, }
[26]
A. Khoreva, “Video Segmentation with Graph Cuts,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@mastersthesis{KhorevaMaster, TITLE = {Video Segmentation with Graph Cuts}, AUTHOR = {Khoreva, Anna}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[27]
M. Omran, “Pedestrian Detection Meets Stuff,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@mastersthesis{OmranMaster, TITLE = {Pedestrian Detection Meets Stuff}, AUTHOR = {Omran, Mohamed}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[28]
J. Tiab, “Design and Evaluation Techniques for Cuttable Multi-touch Sensor Sheets,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@mastersthesis{TiabMastersThesis2014, TITLE = {Design and Evaluation Techniques for Cuttable Multi-touch Sensor Sheets}, AUTHOR = {Tiab, John}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
[29]
M. P. Vidriales Escobar, “Integration of Direct- and Feature-Based Methods for Correspondences Refinement in Structure-from-Motion,” Universität des Saarlandes, Saarbrücken, 2014.
Export
BibTeX
@mastersthesis{VidrialesEscobarMaster2014, TITLE = {Integration of Direct- and Feature-Based Methods for Correspondences Refinement in Structure-from-Motion}, AUTHOR = {Vidriales Escobar, M{\'o}nica Paola}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2014}, DATE = {2014}, }
2013
[30]
E. Afsari Yeganeh, “Human Motion Alignment Using a Depth Camera,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@mastersthesis{Master2013:Elham, TITLE = {Human Motion Alignment Using a Depth Camera}, AUTHOR = {Afsari Yeganeh, Elham}, LANGUAGE = {eng}, LOCALID = {Local-ID: 9D35149C077B583BC1257BA20027DAB1-Master2013:Elham}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
[31]
R. Belet, “Leveraging Independence and Locality for Random Forests in a Distributed Environment,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
With the emergence of big data, inducing regression trees on very large data sets has become a common data mining task. Even though centralized algorithms for computing ensembles of classification/regression trees are a well-studied machine learning problem, their distributed versions still raise scalability, efficiency, and accuracy issues. Most state-of-the-art tree learning algorithms require the data to reside in memory on a single machine. Adopting this approach for trees on big data is not feasible, as the limited resources of a single machine lead to scalability problems. While more scalable implementations of tree learning algorithms have been proposed, they typically require specialized parallel computing architectures, rendering those algorithms complex and error-prone. In this thesis we introduce two approaches to computing ensembles of regression trees on very large training data sets, using the MapReduce framework as the underlying tool. The first approach employs the entire MapReduce cluster to learn tree ensembles in parallel and in a fully distributed fashion. The second approach exploits locality and independence in the tree learning process.
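The locality idea can be sketched in a toy setting: each "mapper" trains an independent regression tree on its local partition (here only a depth-1 stump, written from scratch), and the "reducer" combines the trees into an averaging ensemble. MapReduce is simulated with plain function calls; the stump learner and data are illustrative stand-ins, not the thesis's algorithm.

```python
# Sketch: train one regression stump per data partition (map phase), then
# average the resulting trees into an ensemble (reduce phase).

def fit_stump(points):
    """Depth-1 regression tree: best threshold on x by squared error."""
    best = None
    for t, _ in points:
        left = [y for x, y in points if x <= t]
        right = [y for x, y in points if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def map_phase(partitions):            # each mapper sees only its partition
    return [fit_stump(p) for p in partitions]

def reduce_phase(trees):              # ensemble prediction = tree average
    return lambda x: sum(tree(x) for tree in trees) / len(trees)

data = [(float(x), 2.0 * (x > 5)) for x in range(11)]  # step at x = 5
forest = reduce_phase(map_phase([data[:6] + data[8:], data]))
print(forest(1.0), forest(9.0))
```

Because each stump depends only on its own partition, the map phase runs with no cross-machine communication; only the small fitted models travel to the reducer.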
Export
BibTeX
@mastersthesis{Belet2013, TITLE = {Leveraging Independence and Locality for Random Forests in a Distributed Environment}, AUTHOR = {Belet, Razvan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {With the emergence of big data, inducing regression trees on very large data sets has become a common data mining task. Even though centralized algorithms for computing ensembles of classification/regression trees are a well-studied machine learning problem, their distributed versions still raise scalability, efficiency, and accuracy issues. Most state-of-the-art tree learning algorithms require the data to reside in memory on a single machine. Adopting this approach for trees on big data is not feasible, as the limited resources of a single machine lead to scalability problems. While more scalable implementations of tree learning algorithms have been proposed, they typically require specialized parallel computing architectures, rendering those algorithms complex and error-prone. In this thesis we introduce two approaches to computing ensembles of regression trees on very large training data sets, using the MapReduce framework as the underlying tool. The first approach employs the entire MapReduce cluster to learn tree ensembles in parallel and in a fully distributed fashion. The second approach exploits locality and independence in the tree learning process.}, }
[32]
A. Boldyrev, “Dictionary-based Named Entity Recognition,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@mastersthesis{BoldyrevMastersThesis2013, TITLE = {Dictionary-based Named Entity Recognition}, AUTHOR = {Boldyrev, Artem}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
[33]
L. Dinu, “Randomized Median-of-Three Trees,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
This thesis introduces a new type of randomized search tree based on the median-of-three improvement for quicksort (M3 quicksort). We consider the set of trees obtained by running M3 quicksort and show how to obtain them by a slightly modified insertion procedure for binary search trees. Furthermore, if the input is random, this procedure generates the same probability distribution as M3 quicksort, and consequently accesses in the tree are faster than in randomized search trees. In order to maintain randomness for any type of input sequence, we introduce the concept of support nodes, which define a path covering of the tree. With their help, and by storing the subtree size at each node, random updates take O(log n) time. If each node stores a random priority instead of its subtree size, updates take O(log^2 n) time. Experiments show that while accesses are indeed faster, updates take too long for the method to be competitive.
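The tree family can be sketched directly from the quicksort view: running quicksort with a median-of-three pivot implicitly builds a binary search tree whose root is the chosen pivot. The sampling rule below (median of the first three keys of each subsequence) is an illustrative choice, not necessarily the thesis's exact procedure.

```python
import random

# Build the BST induced by quicksort when each pivot is the median of the
# first three elements of the current subsequence (illustrative rule).
def median3_tree(seq):
    if not seq:
        return None
    pivot = seq[0] if len(seq) < 3 else sorted(seq[:3])[1]
    left = median3_tree([x for x in seq if x < pivot])
    right = median3_tree([x for x in seq if x > pivot])
    return (pivot, left, right)

def inorder(t):
    return [] if t is None else inorder(t[1]) + [t[0]] + inorder(t[2])

def depth(t):
    return 0 if t is None else 1 + max(depth(t[1]), depth(t[2]))

random.seed(1)
keys = random.sample(range(10_000), 255)   # distinct keys
t = median3_tree(keys)
print(depth(t))  # tends to be shallower than a first-element-pivot BST
```

Median-of-three pivots bias the root of every subtree toward the middle of its key range, which is exactly why accesses in these trees are faster on average than in plain randomized search trees.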
Export
BibTeX
@mastersthesis{Dinu2013, TITLE = {Randomized Median-of-Three Trees}, AUTHOR = {Dinu, Lavinia}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {This thesis introduces a new type of randomized search tree based on the median-of-three improvement for quicksort (M3 quicksort). We consider the set of trees obtained by running M3 quicksort and show how to obtain them by a slightly modified insertion procedure for binary search trees. Furthermore, if the input is random, this procedure generates the same probability distribution as M3 quicksort, and consequently accesses in the tree are faster than in randomized search trees. In order to maintain randomness for any type of input sequence, we introduce the concept of support nodes, which define a path covering of the tree. With their help, and by storing the subtree size at each node, random updates take O(log n) time. If each node stores a random priority instead of its subtree size, updates take O(log^2 n) time. Experiments show that while accesses are indeed faster, updates take too long for the method to be competitive.}, }
[34]
M. Eghbali, “Facial Performance Capture Using a Single Kinect Camera,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@mastersthesis{2013Master:Eghbali, TITLE = {Facial Performance Capture Using a Single Kinect Camera}, AUTHOR = {Eghbali, Mandana}, LANGUAGE = {eng}, LOCALID = {Local-ID: 65B027251CF9FF39C1257BEF002B2460-2013Master:Eghbali}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
[35]
E. Ilieva, “Analyzing and Creating Top-k Entity Rankings,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@mastersthesis{Ilieva2013, TITLE = {Analyzing and Creating Top-k Entity Rankings}, AUTHOR = {Ilieva, Evica}, LANGUAGE = {eng}, LOCALID = {Local-ID: DDA2710C9D0C5B92C1257BF00027BC81-Ilieva2013z}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
[36]
P. Kolev, “Community Analysis Using Local Random Walks,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
The problem of graph clustering is a central optimization problem with applications in numerous fields, including computational biology, machine learning, computer vision, data mining, social network analysis, VLSI design, and many more. Essentially, clustering refers to grouping objects with similar properties in the same cluster. Designing an appropriate similarity measure remains an art and is highly dependent on the underlying application. Generally speaking, the problem of graph clustering asks for subsets of vertices that are well connected internally and sparsely connected to the rest of the graph. Motivated by large-scale graph clustering, we investigate local algorithms, based on random walks, that find a set of vertices near a given starting vertex with good worst-case approximation guarantees. The running time of these algorithms is nearly linear in the size of the output set and independent of the size of the whole graph. This feature makes them ideal subroutines in the design of efficient parallel algorithms for graph clustering.
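A minimal sketch of this flavor of algorithm: diffuse a lazy random walk from a seed vertex for a few steps, rank vertices by probability mass divided by degree, and return the prefix (sweep cut) of lowest conductance. The graph, step count, and ranking here are illustrative; truly local algorithms additionally truncate tiny probabilities so that the work stays proportional to the output cluster rather than the whole graph.

```python
# Two triangles joined by one bridge edge; the seed's triangle {0, 1, 2}
# is the natural low-conductance cluster around vertex 0.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
         3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}

def lazy_walk(seed, steps):
    """Distribution of a walk that stays put with probability 1/2."""
    p = {v: 1.0 if v == seed else 0.0 for v in graph}
    for _ in range(steps):
        q = {v: 0.5 * p[v] for v in graph}
        for v, nbrs in graph.items():
            for w in nbrs:                 # spread the other half evenly
                q[w] += 0.5 * p[v] / len(nbrs)
        p = q
    return p

def conductance(S):
    cut = sum(1 for v in S for w in graph[v] if w not in S)
    vol = sum(len(graph[v]) for v in S)
    total = sum(len(graph[v]) for v in graph)
    return cut / min(vol, total - vol)

p = lazy_walk(seed=0, steps=8)
order = sorted(graph, key=lambda v: -p[v] / len(graph[v]))  # sweep order
best = min((set(order[:k]) for k in range(1, len(graph))),
           key=conductance)
print(best)
```

The walk's probability mass leaks only slowly across the single bridge edge, so the sweep over degree-normalized probabilities recovers the seed's triangle.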
Export
BibTeX
@mastersthesis{Kolev2013, TITLE = {Community Analysis Using Local Random Walks}, AUTHOR = {Kolev, Pavel}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {The problem of graph clustering is a central optimization problem with applications in numerous fields, including computational biology, machine learning, computer vision, data mining, social network analysis, VLSI design, and many more. Essentially, clustering refers to grouping objects with similar properties in the same cluster. Designing an appropriate similarity measure remains an art and is highly dependent on the underlying application. Generally speaking, the problem of graph clustering asks for subsets of vertices that are well connected internally and sparsely connected to the rest of the graph. Motivated by large-scale graph clustering, we investigate local algorithms, based on random walks, that find a set of vertices near a given starting vertex with good worst-case approximation guarantees. The running time of these algorithms is nearly linear in the size of the output set and independent of the size of the whole graph. This feature makes them ideal subroutines in the design of efficient parallel algorithms for graph clustering.}, }
[37]
E. Levinkov, “Scene Segmentation in Adverse Vision Conditions,” Universität des Saarlandes, Saarbrücken, 2013.
Export
BibTeX
@mastersthesis{LevinkovMaster2013, TITLE = {Scene Segmentation in Adverse Vision Conditions}, AUTHOR = {Levinkov, Evgeny}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, }
[38]
V. Mukha, “Real-time Display Reconfiguration within Multi-display Environments,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Multi-display environments (MDEs) of all kinds are in widespread use today. A wide variety of devices can help build a common display space: TVs, monitors, projected surfaces, phones, and tablets; anything able to display visual information can be incorporated into an MDE. While the main research emphasis so far has been on interaction techniques and user experience within different MDEs, some work addresses static and dynamic display reconfiguration. In fact, several systems already support MDEs whose displays can be reconfigured on the fly. Different frameworks can perform splitting, streaming, and rendering of visual data on large-scale displays, using dynamic display reconfiguration to calibrate multiple projectors or to combine heterogeneous displays into one display wall. However, each of these frameworks requires its own approach to display reconfiguration. Our goal is to create a model for display reconfiguration that is abstract, transparent, works in real time, and is easily deployable in any MDE. In this work we present an extension to a software framework called Display as a Service (DaaS): a model for real-time display reconfiguration built on DaaS. The DaaS framework allows for generic and transparent management of pixel transport assuming only a network connection, providing a simple high-level implementation for pixel-producing and pixel-displaying applications. The main limitation of this approach is a certain delay between pixel generation and display; however, video encoding and network transport are subject to ongoing improvements that will mitigate this problem. As a proof of concept, we demonstrate three usage scenarios: manual dynamic display reconfiguration, automatic display calibration, and real-time display tracking. We also present a new algorithm for precise display calibration using markers and a handheld camera. The calibration results are evaluated using different tracking libraries. The additional precise-calibration step of our proposed algorithm improves calibration accuracy several-fold compared to a naive approach.
Export
BibTeX
@mastersthesis{Mukha2013, TITLE = {Real-time Display Reconfiguration within Multi-display Environments}, AUTHOR = {Mukha, Victor}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Multi-display environments (MDEs) of all kinds are in widespread use today. A wide variety of devices can help build a common display space: TVs, monitors, projected surfaces, phones, and tablets; anything able to display visual information can be incorporated into an MDE. While the main research emphasis so far has been on interaction techniques and user experience within different MDEs, some work addresses static and dynamic display reconfiguration. In fact, several systems already support MDEs whose displays can be reconfigured on the fly. Different frameworks can perform splitting, streaming, and rendering of visual data on large-scale displays, using dynamic display reconfiguration to calibrate multiple projectors or to combine heterogeneous displays into one display wall. However, each of these frameworks requires its own approach to display reconfiguration. Our goal is to create a model for display reconfiguration that is abstract, transparent, works in real time, and is easily deployable in any MDE. In this work we present an extension to a software framework called Display as a Service (DaaS): a model for real-time display reconfiguration built on DaaS. The DaaS framework allows for generic and transparent management of pixel transport assuming only a network connection, providing a simple high-level implementation for pixel-producing and pixel-displaying applications. The main limitation of this approach is a certain delay between pixel generation and display; however, video encoding and network transport are subject to ongoing improvements that will mitigate this problem. As a proof of concept, we demonstrate three usage scenarios: manual dynamic display reconfiguration, automatic display calibration, and real-time display tracking. We also present a new algorithm for precise display calibration using markers and a handheld camera. The calibration results are evaluated using different tracking libraries. The additional precise-calibration step of our proposed algorithm improves calibration accuracy several-fold compared to a naive approach.}, }
Endnote
%0 Thesis %A Mukha, Victor %Y Slusallek, Philip %A referee: Steimle, J&#252;rgen %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Real-time Display Reconfiguration within Multi-display Environments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0028-1522-8 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2013 %V master %9 master %X Multi-display environments (MDEs) of all kinds are used a lot nowadays. A wide variety of devices helps to build a common display space. TVs, monitors, projected surfaces, phones, tablets, everything that has the ability to display visual information can be incorporated in multi-display environments. While the main research emphasis so far has been on interaction techniques and user experience within different MDEs, some research topics are dealing with static and dynamic display reconfiguration. In fact, several studies already work with MDEs that are capable of display reconfiguration on-the-fly. Different frameworks can perform splitting, streaming and rendering of visual data on large-scale displays with the ability of dynamic display reconfiguration to calibrate multiple-projectors or to combine different heterogeneous displays into one display wall dynamically. However, all of these frameworks require different approaches for display reconfiguration. Our goal is to create a model for display reconfiguration which will be abstract, transparent, will work in real-time, and will be easily deployable in any MDE. In this work we present an extension to a software framework called Display as a Service (DaaS). This extension is represented as a model for real-time display reconfiguration using DaaS. 
The DaaS framework allows for generic and transparent management of pixel transport assuming only a network connection, providing a simple high-level implementation for pixel-producing and pixel-displaying applications. The main limitation of this approach is a certain delay between pixel generation and display. However, the video encoding and network transport are subject to ongoing improvements that will mitigate this problem in the future. As a proof of concept, we demonstrate three usage scenarios: manual dynamic display reconfiguration, automatic display calibration, and real-time display tracking. We also present a new algorithm for precise display calibration using markers and a handheld camera. The calibration results are evaluated using different tracking libraries. The additional precise calibration part of our proposed algorithm makes the calibration accuracy several times better than that of a naive approach.
[39]
A. Podosinnikova, “Robust Principal Component Analysis as a Nonlinear Eigenproblem,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Principal Component Analysis (PCA) is a widely used tool for, e.g., exploratory data analysis, dimensionality reduction and clustering. However, it is well known that PCA is strongly affected by the presence of outliers and, thus, is vulnerable to both gross measurement error and adversarial manipulation of the data. This phenomenon motivates the development of robust PCA as the problem of recovering the principal components of the uncontaminated data. In this thesis, we propose two new algorithms, QRPCA and MDRPCA, for robust PCA components based on the projection-pursuit approach of Huber. While the resulting optimization problems are non-convex and non-smooth, we show that they can be efficiently minimized via the RatioDCA using bundle methods/accelerated proximal methods for the interior problem. The key ingredient for the most promising algorithm (QRPCA) is a robust, location-invariant scale measure with breakdown point 0.5. Extensive experiments show that our QRPCA is competitive with current state-of-the-art methods and outperforms other methods in particular for a large number of outliers.
Export
BibTeX
@mastersthesis{Podosinnikova2013, TITLE = {Robust Principal Component Analysis as a Nonlinear Eigenproblem}, AUTHOR = {Podosinnikova, Anastasia}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Principal Component Analysis (PCA) is a widely used tool for, e.g., exploratory data analysis, dimensionality reduction and clustering. However, it is well known that PCA is strongly affected by the presence of outliers and, thus, is vulnerable to both gross measurement error and adversarial manipulation of the data. This phenomenon motivates the development of robust PCA as the problem of recovering the principal components of the uncontaminated data. In this thesis, we propose two new algorithms, QRPCA and MDRPCA, for robust PCA components based on the projection-pursuit approach of Huber. While the resulting optimization problems are non-convex and non-smooth, we show that they can be efficiently minimized via the RatioDCA using bundle methods/accelerated proximal methods for the interior problem. The key ingredient for the most promising algorithm (QRPCA) is a robust, location-invariant scale measure with breakdown point 0.5. Extensive experiments show that our QRPCA is competitive with current state-of-the-art methods and outperforms other methods in particular for a large number of outliers.}, }
Endnote
%0 Thesis %A Podosinnikova, Anastasia %Y Hein, Matthias %A referee: Gemulla, Rainer %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T Robust Principal Component Analysis as a Nonlinear Eigenproblem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-CC75-A %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2013 %V master %9 master %X Principal Component Analysis (PCA) is a widely used tool for, e.g., exploratory data analysis, dimensionality reduction and clustering. However, it is well known that PCA is strongly affected by the presence of outliers and, thus, is vulnerable to both gross measurement error and adversarial manipulation of the data. This phenomenon motivates the development of robust PCA as the problem of recovering the principal components of the uncontaminated data. In this thesis, we propose two new algorithms, QRPCA and MDRPCA, for robust PCA components based on the projection-pursuit approach of Huber. While the resulting optimization problems are non-convex and non-smooth, we show that they can be efficiently minimized via the RatioDCA using bundle methods/accelerated proximal methods for the interior problem. The key ingredient for the most promising algorithm (QRPCA) is a robust, location-invariant scale measure with breakdown point 0.5. Extensive experiments show that our QRPCA is competitive with current state-of-the-art methods and outperforms other methods in particular for a large number of outliers.
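The projection-pursuit view of robust PCA described in the abstract can be sketched in a few lines: instead of seeking the direction of maximum variance (classical PCA), one seeks the direction along which a robust scale measure of the projected data is maximal. The sketch below is illustrative only: it uses the median absolute deviation (MAD) as the robust scale and a naive random search as the optimizer, whereas the thesis uses a different location-invariant scale with breakdown point 0.5 and minimizes the resulting non-convex problem via the RatioDCA.

```python
import numpy as np

def mad(x):
    # Median absolute deviation: a robust, location-invariant scale measure.
    return np.median(np.abs(x - np.median(x)))

def robust_pc(X, n_candidates=2000, seed=0):
    """First robust principal component via projection pursuit:
    pick the unit direction whose projected data has the largest
    robust scale (here: MAD) rather than the largest variance."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    best_w, best_s = None, -np.inf
    for _ in range(n_candidates):
        w = rng.normal(size=d)
        w /= np.linalg.norm(w)
        s = mad(X @ w)
        if s > best_s:
            best_w, best_s = w, s
    return best_w, best_s

# Data concentrated along the x-axis, plus 5% gross outliers along y.
rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(0, 5, 200), rng.normal(0, 0.5, 200)])
X[:10, 1] += 100.0
w, _ = robust_pc(X)
print(abs(w[0]))  # close to 1: the robust direction ignores the outliers
```

Classical PCA on the same data would be pulled toward the outlier direction; maximizing a high-breakdown scale instead keeps the recovered component aligned with the bulk of the data.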
[40]
M. Reznitskii, “Stereo Vision under Adverse Conditions,” Universität des Saarlandes, Saarbrücken, 2013.
Abstract
Autonomous Driving benefits strongly from a 3D reconstruction of the environment in real-time, often obtained via stereo vision. Semi-Global Matching (SGM) is a popular method of choice for solving this task and is already in use for production vehicles. Despite the enormous progress in the field and the high performance of modern methods, one key challenge remains: stereo vision in automotive scenarios during difficult weather or illumination conditions. Current methods generate strong temporal noise, many disparity outliers, and false positives on a segmentation level. This work addresses these issues by formulating a temporal prior and a scene prior and applying them to SGM. For image sequences captured on a highway during rain, during snowfall, or in low light, these priors significantly improve the object detection rate while reducing the false positive rate. The algorithm also outperforms the ECCV Robust Vision Challenge winner, iSGM.
Export
BibTeX
@mastersthesis{Reznitskii2013, TITLE = {Stereo Vision under Adverse Conditions}, AUTHOR = {Reznitskii, Maxim}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2013}, DATE = {2013}, ABSTRACT = {Autonomous Driving benefits strongly from a 3D reconstruction of the environment in real-time, often obtained via stereo vision. Semi-Global Matching (SGM) is a popular method of choice for solving this task and is already in use for production vehicles. Despite the enormous progress in the field and the high performance of modern methods, one key challenge remains: stereo vision in automotive scenarios during difficult weather or illumination conditions. Current methods generate strong temporal noise, many disparity outliers, and false positives on a segmentation level. This work addresses these issues by formulating a temporal prior and a scene prior and applying them to SGM. For image sequences captured on a highway during rain, during snowfall, or in low light, these priors significantly improve the object detection rate while reducing the false positive rate. The algorithm also outperforms the ECCV Robust Vision Challenge winner, iSGM.}, }
Endnote
%0 Thesis %A Reznitskii, Maxim %Y Weickert, Joachim %A referee: Schiele, Bernt %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Stereo Vision under Adverse Conditions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-CC7E-7 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2013 %V master %9 master %X Autonomous Driving benefits strongly from a 3D reconstruction of the environment in real-time, often obtained via stereo vision. Semi-Global Matching (SGM) is a popular method of choice for solving this task and is already in use for production vehicles. Despite the enormous progress in the field and the high performance of modern methods, one key challenge remains: stereo vision in automotive scenarios during difficult weather or illumination conditions. Current methods generate strong temporal noise, many disparity outliers, and false positives on a segmentation level. This work addresses these issues by formulating a temporal prior and a scene prior and applying them to SGM. For image sequences captured on a highway during rain, during snowfall, or in low light, these priors significantly improve the object detection rate while reducing the false positive rate. The algorithm also outperforms the ECCV Robust Vision Challenge winner, iSGM.
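For context, the SGM baseline that this thesis extends aggregates per-pixel matching costs along image paths using Hirschmüller's well-known recursion; the thesis's temporal and scene priors are additions on top of this. A minimal single-direction sketch (the penalty values P1, P2 and the toy cost volume are illustrative, not the thesis's parameters):

```python
import numpy as np

def sgm_path(cost, P1=1.0, P2=8.0):
    """SGM cost aggregation along one scanline direction.
    cost: (W, D) array of matching costs over D disparities. Implements
      L(p,d) = C(p,d) + min(L(p-1,d), L(p-1,d+-1)+P1, min_k L(p-1,k)+P2)
               - min_k L(p-1,k)."""
    W, D = cost.shape
    L = np.empty_like(cost)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        m = prev.min()
        plus = np.concatenate([prev[1:], [np.inf]]) + P1    # d+1 neighbor
        minus = np.concatenate([[np.inf], prev[:-1]]) + P1  # d-1 neighbor
        L[x] = cost[x] + np.minimum.reduce(
            [prev, plus, minus, np.full(D, m + P2)]) - m
    return L

# Toy cost volume: true disparity 2 is cheapest everywhere, except one
# noisy pixel where the winner-take-all choice would be ambiguous.
rng = np.random.default_rng(0)
cost = rng.uniform(1, 2, (20, 5))
cost[:, 2] = 0.5
cost[7, 2] = 1.9  # degraded measurement at the true disparity
disp = sgm_path(cost).argmin(axis=1)
print(disp[7])  # → 2: the smoothness penalties carry the pixel through
```

Summing such path costs over several directions (typically 8 or 16) gives the full SGM aggregation; the sketch keeps a single left-to-right pass for brevity.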
2012
[41]
N. Alcaraz Milman, “KeyPathwayMiner - Detecting Case-specific Biological Pathways by Using Expression Data,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Advances in the field of systems biology have provided the biological community with massive amounts of pathway data that describe the interplay of genes and their products. The resulting biological networks usually consist of thousands of entities and interactions that can be modeled mathematically as graphs. Since these networks only provide a static picture of the accumulated knowledge, pathways that are affected during development of complex diseases cannot be extracted easily. This gap can be filled by means of OMICS technologies such as DNA microarrays, which measure the activity of genes and proteins under different conditions. Integration of both interaction and expression datasets can increase the quality and accuracy of analysis when compared to independent inspection of each. However, sophisticated computational methods are needed to deal with the size of the datasets while also accounting for the presence of biological and technological noise inherent in the data generating process. In this dissertation the KeyPathwayMiner is presented, a method that enables the extraction and visualization of affected pathways given the results of a series of gene expression studies. Specifically, given network and gene expression data, KeyPathwayMiner identifies those maximal subgraphs where all but k nodes of the subnetwork are differentially expressed in all but at most l cases in the gene expression data. This new formulation allows users to control the number of outliers with two parameters that provide good interpretability of the solutions. Since identifying these subgraphs is computationally intensive, a heuristic algorithm based on Ant Colony Optimization was designed and adapted to this problem, where solutions are reported in the order of seconds on a standard personal computer.
KeyPathwayMiner was tested on real Huntington's Disease and Breast Cancer datasets, where it is able to extract pathways containing a large percentage of known relevant genes when compared to other similar approaches. KeyPathwayMiner has been implemented as a plugin for Cytoscape, one of the most widely used open source biological network analysis and visualization platforms. KeyPathwayMiner is available online at http://keypathwayminer.mpi-inf.mpg.de or through the plugin manager of Cytoscape.
Export
BibTeX
@mastersthesis{AlcarazMilman2012, TITLE = {{K}ey{P}athway{M}iner -- Detecting Case-specific Biological Pathways by Using Expression Data}, AUTHOR = {Alcaraz Milman, Nicolas}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Advances in the field of systems biology have provided the biological community with massive amounts of pathway data that describe the interplay of genes and their products. The resulting biological networks usually consist of thousands of entities and interactions that can be modeled mathematically as graphs. Since these networks only provide a static picture of the accumulated knowledge, pathways that are affected during development of complex diseases cannot be extracted easily. This gap can be filled by means of OMICS technologies such as DNA microarrays, which measure the activity of genes and proteins under different conditions. Integration of both interaction and expression datasets can increase the quality and accuracy of analysis when compared to independent inspection of each. However, sophisticated computational methods are needed to deal with the size of the datasets while also accounting for the presence of biological and technological noise inherent in the data generating process. In this dissertation the KeyPathwayMiner is presented, a method that enables the extraction and visualization of affected pathways given the results of a series of gene expression studies. Specifically, given network and gene expression data, KeyPathwayMiner identifies those maximal subgraphs where all but k nodes of the subnetwork are differentially expressed in all but at most l cases in the gene expression data. This new formulation allows users to control the number of outliers with two parameters that provide good interpretability of the solutions.
Since identifying these subgraphs is computationally intensive, a heuristic algorithm based on Ant Colony Optimization was designed and adapted to this problem, where solutions are reported in the order of seconds on a standard personal computer. KeyPathwayMiner was tested on real Huntington's Disease and Breast Cancer datasets, where it is able to extract pathways containing a large percentage of known relevant genes when compared to other similar approaches. KeyPathwayMiner has been implemented as a plugin for Cytoscape, one of the most widely used open source biological network analysis and visualization platforms. KeyPathwayMiner is available online at http://keypathwayminer.mpi-inf.mpg.de or through the plugin manager of Cytoscape.}, }
Endnote
%0 Thesis %A Alcaraz Milman, Nicolas %Y Baumbach, Jan %A referee: Helms, Volkhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations %T KeyPathwayMiner - Detecting Case-specific Biological Pathways by Using Expression Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-CC8A-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master %X Advances in the field of systems biology have provided the biological community with massive amounts of pathway data that describe the interplay of genes and their products. The resulting biological networks usually consist of thousands of entities and interactions that can be modeled mathematically as graphs. Since these networks only provide a static picture of the accumulated knowledge, pathways that are affected during development of complex diseases cannot be extracted easily. This gap can be filled by means of OMICS technologies such as DNA microarrays, which measure the activity of genes and proteins under different conditions. Integration of both interaction and expression datasets can increase the quality and accuracy of analysis when compared to independent inspection of each. However, sophisticated computational methods are needed to deal with the size of the datasets while also accounting for the presence of biological and technological noise inherent in the data generating process. In this dissertation the KeyPathwayMiner is presented, a method that enables the extraction and visualization of affected pathways given the results of a series of gene expression studies. Specifically, given network and gene expression data, KeyPathwayMiner identifies those maximal subgraphs where all but k nodes of the subnetwork are differentially expressed in all but at most l cases in the gene expression data.
This new formulation allows users to control the number of outliers with two parameters that provide good interpretability of the solutions. Since identifying these subgraphs is computationally intensive, a heuristic algorithm based on Ant Colony Optimization was designed and adapted to this problem, where solutions are reported in the order of seconds on a standard personal computer. KeyPathwayMiner was tested on real Huntington's Disease and Breast Cancer datasets, where it is able to extract pathways containing a large percentage of known relevant genes when compared to other similar approaches. KeyPathwayMiner has been implemented as a plugin for Cytoscape, one of the most widely used open source biological network analysis and visualization platforms. KeyPathwayMiner is available online at http://keypathwayminer.mpi-inf.mpg.de or through the plugin manager of Cytoscape.
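The combinatorial condition stated in the abstract, maximal connected subgraphs in which all but at most k nodes are differentially expressed in all but at most l cases, reduces to a simple feasibility check for a candidate node set. The sketch below (function name and data layout are hypothetical, and it does not implement the Ant Colony Optimization search itself) illustrates that check:

```python
def satisfies_kl(nodes, adj, exc, k, l):
    """KeyPathwayMiner-style feasibility check for a candidate subgraph:
    the induced subgraph must be connected, and at most k of its nodes may
    fail to be differentially expressed in more than l cases.
    nodes: set of node ids; adj: dict node -> set of neighbours;
    exc: dict node -> number of cases in which the node is NOT
    differentially expressed (its exception count)."""
    nodes = set(nodes)
    # connectivity via depth-first search restricted to the candidate set
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u] & nodes:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    if seen != nodes:
        return False
    # at most k nodes may exceed the per-node budget of l exception cases
    outliers = sum(1 for v in nodes if exc[v] > l)
    return outliers <= k

# Toy network: a path a-b-c-d, with per-node exception counts.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
exceptions = {"a": 0, "b": 3, "c": 1, "d": 0}
print(satisfies_kl({"a", "b", "c", "d"}, adj, exceptions, k=1, l=1))  # → True
print(satisfies_kl({"a", "b", "c", "d"}, adj, exceptions, k=0, l=1))  # → False
```

Here node b misses the expression criterion in 3 > l cases, so the subgraph is feasible only while the budget of k exception nodes is not exhausted; the actual tool searches for maximal such subgraphs heuristically.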
[42]
N. Arvanitopoulos-Darginis, “Aggregation of Multiple Clusterings and Active Learning in a Transductive Setting,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
In this work we proposed a novel transductive method to solve the problem of learning from partially labeled data. Our main idea was to aggregate information obtained from several clusterings to infer the labels of the unlabeled data. While our method is not restricted to a specific clustering method, we chose to use in our experiments the normalized variant of 1-spectral clustering, which was demonstrated to produce in most cases better clusterings than the standard spectral clustering method. Our approach yielded results which were at least comparable to, and in some cases even significantly better than the best results obtained by state-of-the-art methods reported in the literature. Furthermore, we proposed a novel active learning framework that is able to query the labels of the most informative points which help in the classification of the unlabeled points. For the majority vote scheme we provided some guarantees on the number of points that should be drawn from each cluster in order to infer the correct label of the cluster with high probability. Moreover, in the ridge regression scheme we proposed an algorithm that in each step selects the most uncertain point in terms of the prediction function of the classifier (the point that lies near the decision boundary of the classifier). In both cases, experimental results show the strength of our methods and confirm our theoretical guarantees. The results look very promising and open several interesting directions of future research. For the SSL scheme, it is interesting to test the performance of several other clustering approaches, such as k-means, standard spectral clustering, hierarchical clustering, etc., and combine them together in one general method. Our intuition is that the algorithm should be able to select only the good clusterings that provide discriminative information for each specific problem.
Apart from ridge regression, it would be beneficial to experiment with other fitting approaches that produce sparse representations in our constructed basis. For the active learning framework, one interesting direction is to further extend it to more general clusterings that take into account the hierarchical structure of the data. In that way, we will take advantage of the underlying hierarchy and by adaptively selecting the pruning of the cluster tree we can (potentially) further improve our sampling strategy. Additionally, we believe that in the multi-clustering scenario extensive improvements of our algorithm can be proposed in order to better take advantage of the variation in the multiple clustering representations of the data. Finally, as our methods scale to large-scale problems and partially labeled data occurs in many different areas ranging from web documents to protein data, there is room for many interesting applications of the proposed methods.
Export
BibTeX
@mastersthesis{Arvanitopoulos-Darginis2011, TITLE = {Aggregation of Multiple Clusterings and Active Learning in a Transductive Setting}, AUTHOR = {Arvanitopoulos-Darginis, Nikolaos}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {In this work we proposed a novel transductive method to solve the problem of learning from partially labeled data. Our main idea was to aggregate information obtained from several clusterings to infer the labels of the unlabeled data. While our method is not restricted to a specific clustering method, we chose to use in our experiments the normalized variant of 1-spectral clustering, which was demonstrated to produce in most cases better clusterings than the standard spectral clustering method. Our approach yielded results which were at least comparable to, and in some cases even significantly better than the best results obtained by state-of-the-art methods reported in the literature. Furthermore, we proposed a novel active learning framework that is able to query the labels of the most informative points which help in the classification of the unlabeled points. For the majority vote scheme we provided some guarantees on the number of points that should be drawn from each cluster in order to infer the correct label of the cluster with high probability. Moreover, in the ridge regression scheme we proposed an algorithm that in each step selects the most uncertain point in terms of the prediction function of the classifier (the point that lies near the decision boundary of the classifier). In both cases, experimental results show the strength of our methods and confirm our theoretical guarantees. The results look very promising and open several interesting directions of future research.
For the SSL scheme, it is interesting to test the performance of several other clustering approaches, such as k-means, standard spectral clustering, hierarchical clustering, etc., and combine them together in one general method. Our intuition is that the algorithm should be able to select only the good clusterings that provide discriminative information for each specific problem. Apart from ridge regression, it would be beneficial to experiment with other fitting approaches that produce sparse representations in our constructed basis. For the active learning framework, one interesting direction is to further extend it to more general clusterings that take into account the hierarchical structure of the data. In that way, we will take advantage of the underlying hierarchy and by adaptively selecting the pruning of the cluster tree we can (potentially) further improve our sampling strategy. Additionally, we believe that in the multi-clustering scenario extensive improvements of our algorithm can be proposed in order to better take advantage of the variation in the multiple clustering representations of the data. Finally, as our methods scale to large-scale problems and partially labeled data occurs in many different areas ranging from web documents to protein data, there is room for many interesting applications of the proposed methods.}, }
Endnote
%0 Thesis %A Arvanitopoulos-Darginis, Nikolaos %Y Hein, Matthias %A referee: Weickert, Joachim %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Aggregation of Multiple Clusterings and Active Learning in a Transductive Setting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-CC8E-3 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master %X In this work we proposed a novel transductive method to solve the problem of learning from partially labeled data. Our main idea was to aggregate information obtained from several clusterings to infer the labels of the unlabeled data. While our method is not restricted to a specific clustering method, we chose to use in our experiments the normalized variant of 1-spectral clustering, which was demonstrated to produce in most cases better clusterings than the standard spectral clustering method. Our approach yielded results which were at least comparable to, and in some cases even significantly better than the best results obtained by state-of-the-art methods reported in the literature. Furthermore, we proposed a novel active learning framework that is able to query the labels of the most informative points which help in the classification of the unlabeled points. For the majority vote scheme we provided some guarantees on the number of points that should be drawn from each cluster in order to infer the correct label of the cluster with high probability. Moreover, in the ridge regression scheme we proposed an algorithm that in each step selects the most uncertain point in terms of the prediction function of the classifier (the point that lies near the decision boundary of the classifier). In both cases, experimental results show the strength of our methods and confirm our theoretical guarantees. The results look very promising and open several interesting directions of future research.
For the SSL scheme, it is interesting to test the performance of several other clustering approaches, such as k-means, standard spectral clustering, hierarchical clustering, etc., and combine them together in one general method. Our intuition is that the algorithm should be able to select only the good clusterings that provide discriminative information for each specific problem. Apart from ridge regression, it would be beneficial to experiment with other fitting approaches that produce sparse representations in our constructed basis. For the active learning framework, one interesting direction is to further extend it to more general clusterings that take into account the hierarchical structure of the data. In that way, we will take advantage of the underlying hierarchy and by adaptively selecting the pruning of the cluster tree we can (potentially) further improve our sampling strategy. Additionally, we believe that in the multi-clustering scenario extensive improvements of our algorithm can be proposed in order to better take advantage of the variation in the multiple clustering representations of the data. Finally, as our methods scale to large-scale problems and partially labeled data occurs in many different areas ranging from web documents to protein data, there is room for many interesting applications of the proposed methods.
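The aggregation idea in the abstract, inferring labels for unlabeled points by combining the votes of several clusterings, can be sketched as follows. This is a simplified illustration with hypothetical names: each clustering votes, for every point, with the majority label of the labeled points sharing its cluster, and the per-clustering votes are combined by a final majority. The thesis uses 1-spectral clusterings and provides sampling guarantees that this sketch does not model.

```python
from collections import Counter

def aggregate_labels(clusterings, labels):
    """Transductive label inference by aggregating multiple clusterings.
    clusterings: list of assignments, clusterings[m][i] = cluster id of
                 point i under clustering m.
    labels: dict point index -> known label (the labeled subset)."""
    n = len(clusterings[0])
    votes = [Counter() for _ in range(n)]
    for assign in clusterings:
        # majority label of the labeled points inside each cluster
        per_cluster = {}
        for i, lab in labels.items():
            per_cluster.setdefault(assign[i], Counter())[lab] += 1
        for i in range(n):
            c = assign[i]
            if c in per_cluster:
                votes[i][per_cluster[c].most_common(1)[0][0]] += 1
    return [votes[i].most_common(1)[0][0] if votes[i] else None
            for i in range(n)]

# Three clusterings of 6 points; only points 0 and 5 are labeled.
clusterings = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
]
labels = {0: "A", 5: "B"}
print(aggregate_labels(clusterings, labels))  # → ['A', 'A', 'A', 'B', 'B', 'B']
```

The borderline points 2 and 3 are assigned differently by the individual clusterings, and the aggregation resolves them by majority, which is exactly the robustness the multi-clustering approach is after.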
[43]
N. Azmy, “Formula Renaming with Generalizations,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@mastersthesis{Azmy12, TITLE = {Formula Renaming with Generalizations}, AUTHOR = {Azmy, Noran}, LANGUAGE = {eng}, LOCALID = {Local-ID: DF824D161A8C2600C1257AF6004FEBFF-Azmy12}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Azmy, Noran %Y Weidenbach, Christoph %A referee: Werner, Stephan %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Formula Renaming with Generalizations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-B40C-0 %F OTHER: Local-ID: DF824D161A8C2600C1257AF6004FEBFF-Azmy12 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master
[44]
E. Cergani, “Relation Extraction Using Matrix Factorization Methods,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@mastersthesis{Cergani2012, TITLE = {Relation Extraction Using Matrix Factorization Methods}, AUTHOR = {Cergani, Ervina}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-B2109FA6099C9CC8C1257AC900301865-Cergani2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Cergani, Ervina %Y Weikum, Gerhard %A referee: Miettinen, Pauli %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Relation Extraction Using Matrix Factorization Methods : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6277-9 %F EDOC: 647514 %F OTHER: Local-ID: C1256DBF005F876D-B2109FA6099C9CC8C1257AC900301865-Cergani2012 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master
[45]
C. Croitoru, “Algorithmic Aspects of Abstract Argumentation Frameworks,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@mastersthesis{CroitoruMaster2012, TITLE = {Algorithmic Aspects of Abstract Argumentation Frameworks}, AUTHOR = {Croitoru, Cosmina}, LANGUAGE = {eng}, LOCALID = {Local-ID: F5B5A180A18D2EEBC1257B1600392684-CroitoruMaster2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Croitoru, Cosmina %Y Mehlhorn, Kurt %A referee: K&#246;tzing, Timo %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Algorithmic Aspects of Abstract Argumentation Frameworks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-BAAD-0 %F OTHER: Local-ID: F5B5A180A18D2EEBC1257B1600392684-CroitoruMaster2012 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %P X, 75 p. %V master %9 master
[46]
I. Goncharov, “Local Constancy Assumption Selection for Variational Optical Flow,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Variational methods are among the most successful approaches for computing high-quality optical flow. However, there are still many ways to improve. In this thesis we first provide a general overview of the main ideas of existing approaches by the example of the complementary optical flow method of Zimmer et al. [19]. This serves us as a starting point for introducing the concept of automatic local selection of the most suitable constancy assumption on image features, which allows us to further improve the quality of optical flow estimation. As a main contribution, we provide the variational formulation that directly leads to the proposed behavior. The derived model is then analysed and evaluated in a series of experiments.
Export
BibTeX
@mastersthesis{Goncharov2011, TITLE = {Local Constancy Assumption Selection for Variational Optical Flow}, AUTHOR = {Goncharov, Ilya}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Variational methods are among the most successful approaches for computing high-quality optical flow. However, there are still many ways to improve. In this thesis we first provide a general overview of the main ideas of existing approaches by the example of the complementary optical flow method of Zimmer et al. [19]. This serves us as a starting point for introducing the concept of automatic local selection of the most suitable constancy assumption on image features, which allows us to further improve the quality of optical flow estimation. As a main contribution, we provide the variational formulation that directly leads to the proposed behavior. The derived model is then analysed and evaluated in a series of experiments.}, }
Endnote
%0 Thesis %A Goncharov, Ilya %Y Bruhn, Andreas %A referee: Weickert, Joachim %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Local Constancy Assumption Selection for Variational Optical Flow : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0026-D083-E %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master %X Variational methods are among the most successful approaches for computing high-quality optical flow. However, there are still many ways to improve. In this thesis we first provide a general overview of the main ideas of existing approaches by the example of the complementary optical flow method of Zimmer et al. [19]. This serves us as a starting point for introducing the concept of automatic local selection of the most suitable constancy assumption on image features, which allows us to further improve the quality of optical flow estimation. As a main contribution, we provide the variational formulation that directly leads to the proposed behavior. The derived model is then analysed and evaluated in a series of experiments.
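The core idea of local constancy assumption selection can be illustrated with a toy sketch: given per-pixel residuals of two candidate data terms (e.g., brightness constancy and gradient constancy) for a candidate flow field, keep the smaller one at each pixel. This is only a conceptual illustration with hypothetical names; the thesis embeds the selection inside a variational formulation rather than making a hard post-hoc choice.

```python
import numpy as np

def select_constancy(r_bright, r_grad, penalty=0.0):
    """Local data-term selection: at each pixel, keep the constancy
    assumption whose residual is smaller. r_bright and r_grad are
    per-pixel residuals of the brightness- and gradient-constancy
    assumptions for a candidate flow; 'penalty' can bias the choice
    toward the simpler brightness term."""
    choose_grad = r_grad + penalty < r_bright
    data_term = np.where(choose_grad, r_grad + penalty, r_bright)
    return data_term, choose_grad

# An illumination change violates brightness constancy in the right half,
# so the gradient-constancy term should be selected there.
r_bright = np.array([[0.1, 0.1, 2.0, 2.0]])
r_grad   = np.array([[0.3, 0.3, 0.2, 0.2]])
term, mask = select_constancy(r_bright, r_grad)
print(mask.astype(int))  # → [[0 0 1 1]]
```

Gradient constancy is invariant to additive illumination changes, which is why it wins exactly where brightness constancy breaks down in this example.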
[47]
H. Khoshnevis, “Discriminating 4G and Broadcast Signals via Cyclostationary Feature Detection,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
According to the FCC, spectrum allocation will be one of the problems of future telecommunication systems. Indeed, the available parts of the spectrum have been assigned statically to some applications such as mobile networks and broadcasting systems; hence finding a proper operating band for new systems is difficult. These telecommunication systems are called primary users. However, primary users do not always use their entire bandwidth, and therefore a lot of spectrum holes can be detected. These spectrum holes can be utilized for undefined systems called secondary users. The Federal Communications Commission (FCC) introduced cognitive radio, which detects these holes and assigns them to secondary users. There are several techniques for detection of signals, such as energy-based detection, matched-filter detection, and cyclostationary-based detection. Cyclostationary-based detection, as one of the most sensitive methods, can be used for detection and classification of different systems. However, traditional multi-cycle and single-cycle detectors suffer from high complexity. Fortunately, using some prior knowledge about the signal, this shortcoming can be overcome. In this thesis, signals of DVB-T2 as a broadcasting system and 3GPP LTE and IEEE 802.16 (WiMAX) as mobile networks have been evaluated, and two cyclostationary-based algorithms for detection and classification of these signals in SISO and MIMO antenna configurations are proposed.
Export
BibTeX
@mastersthesis{Khoshnevis2012, TITLE = {Discriminating {4G} and Broadcast Signals via Cyclostationary Feature Detection}, AUTHOR = {Khoshnevis, Hossein}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {According to the FCC, spectrum allocation will be one of the problems of future telecommunication systems. Indeed, the available parts of the spectrum have been assigned statically to some applications such as mobile networks and broadcasting systems; hence finding a proper operating band for new systems is difficult. These telecommunication systems are called primary users. However, primary users do not always use their entire bandwidth, and therefore a lot of spectrum holes can be detected. These spectrum holes can be utilized for undefined systems called secondary users. The Federal Communications Commission (FCC) introduced cognitive radio, which detects these holes and assigns them to secondary users. There are several techniques for detection of signals, such as energy-based detection, matched-filter detection, and cyclostationary-based detection. Cyclostationary-based detection, as one of the most sensitive methods, can be used for detection and classification of different systems. However, traditional multi-cycle and single-cycle detectors suffer from high complexity. Fortunately, using some prior knowledge about the signal, this shortcoming can be overcome. In this thesis, signals of DVB-T2 as a broadcasting system and 3GPP LTE and IEEE 802.16 (WiMAX) as mobile networks have been evaluated, and two cyclostationary-based algorithms for detection and classification of these signals in SISO and MIMO antenna configurations are proposed.}, }
Endnote
%0 Thesis %A Khoshnevis, Hossein %Y Herfet, Thorsten %A referee: Schiele, Bernt %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Discriminating 4G and Broadcast Signals via Cyclostationary Feature Detection : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-9F11-F %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master %X According to the FCC, spectrum allocation will be one of the problems of future telecommunication systems. Indeed, the available parts of the spectrum have been assigned statically to some applications such as mobile networks and broadcasting systems; hence finding a proper operating band for new systems is difficult. These telecommunication systems are called primary users. However, primary users do not always use their entire bandwidth, and therefore a lot of spectrum holes can be detected. These spectrum holes can be utilized for undefined systems called secondary users. The Federal Communications Commission (FCC) introduced cognitive radio, which detects these holes and assigns them to secondary users. There are several techniques for detection of signals, such as energy-based detection, matched-filter detection, and cyclostationary-based detection. Cyclostationary-based detection, as one of the most sensitive methods, can be used for detection and classification of different systems. However, traditional multi-cycle and single-cycle detectors suffer from high complexity. Fortunately, using some prior knowledge about the signal, this shortcoming can be overcome. In this thesis, signals of DVB-T2 as a broadcasting system and 3GPP LTE and IEEE 802.16 (WiMAX) as mobile networks have been evaluated, and two cyclostationary-based algorithms for detection and classification of these signals in SISO and MIMO antenna configurations are proposed.
[48]
S. Moran, “Shattering Extremal Systems,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@mastersthesis{MoranMaster2012, TITLE = {Shattering Extremal Systems}, AUTHOR = {Moran, Shay}, LANGUAGE = {eng}, LOCALID = {Local-ID: BF24A25446C99102C1257B1600386F5D-MoranMaster2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Moran, Shay %Y Mehlhorn, Kurt %A referee: Litman, Ami %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Shattering Extremal Systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-BAA6-D %F OTHER: Local-ID: BF24A25446C99102C1257B1600386F5D-MoranMaster2012 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %P II, 39 p. %V master %9 master
[49]
D. B. Nguyen, “Efficient Entity Disambiguation via Similarity Hashing,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@mastersthesis{Nguyen2012, TITLE = {Efficient Entity Disambiguation via Similarity Hashing}, AUTHOR = {Nguyen, Dat Ba}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-86BEFB1566020C4AC1257A6400543D05-Nguyen2012}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Nguyen, Dat Ba %Y Theobald, Martin %A referee: Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Efficient Entity Disambiguation via Similarity Hashing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-626B-5 %F EDOC: 647513 %F OTHER: Local-ID: C1256DBF005F876D-86BEFB1566020C4AC1257A6400543D05-Nguyen2012 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master
[50]
J. Parks, “Detecting Structural Regularity in Perspective Images,” Universität des Saarlandes, Saarbrücken, 2012.
Export
BibTeX
@mastersthesis{MasterParks, TITLE = {Detecting Structural Regularity in Perspective Images}, AUTHOR = {Parks, Justin}, LANGUAGE = {eng}, LOCALID = {Local-ID: 45164}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, }
Endnote
%0 Thesis %A Parks, Justin %Y Thorm&#228;hlen, Thorsten %A referee: Lasowski, Ruxandra %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Detecting Structural Regularity in Perspective Images : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-F425-9 %F OTHER: Local-ID: 45164 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master
[51]
B. Saliba, “An Evaluation Method For Indoor Positioning Systems On The Example Of LORIOT,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
In this thesis, an evaluation method for an indoor positioning system called LORIOT is presented. This positioning system combines two technologies (RFID and IR) for positioning based on geo-referenced dynamic Bayesian networks. LORIOT allows the users to calculate their position on their own device without sending any data to a server responsible for calculating the position [3]. This property provides lower complexity and fast calculation. This positioning method is developed by placing the tags in the environment and letting the user carry the sensors that are used to read data from these tags. The user is then able to choose whether or not to pass the positioning data to any third-party application. The main focus here is to check the actual accuracy and performance of indoor positioning systems using the proposed evaluation method, which is tested on LORIOT. Most of the evaluation methods that have been used to test the level of accuracy of indoor positioning systems are biased and not good enough. For instance, the system is tested under optimal conditions of the environment. To achieve this goal, the evaluation method will be used to test LORIOT in a natural environment, using data from natural traces of people walking in the environment without giving them any task to do. This type of evaluation criterion improves the results because the system would be installed in an environment which has the same properties as the environment in this study (where the evaluation tests are done). In addition, the system will position people while walking naturally (unlike most evaluation methods, which do not test indoor positioning systems while walking).
Export
BibTeX
@mastersthesis{Saliba2012, TITLE = {An Evaluation Method For Indoor Positioning Systems On The Example Of {LORIOT}}, AUTHOR = {Saliba, Bahjat}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {In this thesis, an evaluation method for an indoor positioning system called LORIOT is presented. This positioning system combines two technologies (RFID and IR) for positioning based on geo-referenced dynamic Bayesian networks. LORIOT allows the users to calculate their position on their own device without sending any data to a server responsible for calculating the position [3]. This property provides lower complexity and fast calculation. This positioning method is developed by placing the tags in the environment and letting the user carry the sensors that are used to read data from these tags. The user is then able to choose whether or not to pass the positioning data to any third-party application. The main focus here is to check the actual accuracy and performance of indoor positioning systems using the proposed evaluation method, which is tested on LORIOT. Most of the evaluation methods that have been used to test the level of accuracy of indoor positioning systems are biased and not good enough. For instance, the system is tested under optimal conditions of the environment. To achieve this goal, the evaluation method will be used to test LORIOT in a natural environment, using data from natural traces of people walking in the environment without giving them any task to do. This type of evaluation criterion improves the results because the system would be installed in an environment which has the same properties as the environment in this study (where the evaluation tests are done). In addition, the system will position people while walking naturally (unlike most evaluation methods, which do not test indoor positioning systems while walking).}, }
Endnote
%0 Thesis %A Saliba, Bahjat %Y Wahlster, Wolfgang %A referee: M&#252;ller, Christian %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T An Evaluation Method For Indoor Positioning Systems On The Example Of LORIOT : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-9F69-D %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master %X In this thesis, an evaluation method for an indoor positioning system called LORIOT is presented. This positioning system combines two technologies (RFID and IR) for positioning based on geo-referenced dynamic Bayesian networks. LORIOT allows the users to calculate their position on their own device without sending any data to a server responsible for calculating the position [3]. This property provides lower complexity and fast calculation. This positioning method is developed by placing the tags in the environment and letting the user carry the sensors that are used to read data from these tags. The user is then able to choose whether or not to pass the positioning data to any third-party application. The main focus here is to check the actual accuracy and performance of indoor positioning systems using the proposed evaluation method, which is tested on LORIOT. Most of the evaluation methods that have been used to test the level of accuracy of indoor positioning systems are biased and not good enough. For instance, the system is tested under optimal conditions of the environment. To achieve this goal, the evaluation method will be used to test LORIOT in a natural environment, using data from natural traces of people walking in the environment without giving them any task to do. This type of evaluation criterion improves the results because the system would be installed in an environment which has the same properties as the environment in this study (where the evaluation tests are done). In addition, the system will position people while walking naturally (unlike most evaluation methods, which do not test indoor positioning systems while walking).
[52]
L. Teris, “Securing User-data in Android: A conceptual approach for consumer and enterprise usage,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Nowadays, smartphones and tablets are replacing the personal computer for the average user. As more activities move to these gadgets, so does the sensitive data with which they operate. However, there are few data protection mechanisms for the mobile world at the moment, especially for scenarios where the attacker has full access to the device (e.g. when the device is lost or stolen). In this thesis, we tackle this problem and propose a novel encryption system for Android, the top-selling mobile operating system. Our investigation of the Android platform leads to a set of observations that motivate our effort. Firstly, the existing defense mechanisms are too weak or too rigid in terms of access control and granularity of the secured data unit. Secondly, Android can be corrupted such that the default encryption solution will reveal sensitive content via the debug interface. In response, we design and (partially) implement an encryption system that addresses these shortcomings and operates in a manner that is transparent to the user. Also, by leveraging hardware security mechanisms, our system offers security guarantees even when running on a corrupted OS. Moreover, the system is conceptually designed to operate in an enterprise environment where mobile devices are administered by a central authority. Finally, we provide a prototypical implementation and evaluate our system to show the practicality of our approach.
Export
BibTeX
@mastersthesis{Teris2012, TITLE = {Securing User-data in Android A conceptual approach for consumer and enterprise usage}, AUTHOR = {Teris, Liviu}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Nowadays, smartphones and tablets are replacing the personal computer for the average user. As more activities move to these gadgets, so does the sensitive data with which they operate. However, there are few data protection mechanisms for the mobile world at the moment, especially for scenarios where the attacker has full access to the device (e.g. when the device is lost or stolen). In this thesis, we tackle this problem and propose a novel encryption system for Android, the top-selling mobile operating system. Our investigation of the Android platform leads to a set of observations that motivate our effort. Firstly, the existing defense mechanisms are too weak or too rigid in terms of access control and granularity of the secured data unit. Secondly, Android can be corrupted such that the default encryption solution will reveal sensitive content via the debug interface. In response, we design and (partially) implement an encryption system that addresses these shortcomings and operates in a manner that is transparent to the user. Also, by leveraging hardware security mechanisms, our system offers security guarantees even when running on a corrupted OS. Moreover, the system is conceptually designed to operate in an enterprise environment where mobile devices are administered by a central authority. Finally, we provide a prototypical implementation and evaluate our system to show the practicality of our approach.}, }
Endnote
%0 Thesis %A Teris, Liviu %Y Backes, Michael %A referee: Hammer, Christian %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Securing User-data in Android A conceptual approach for consumer and enterprise usage : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A17C-1 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master %X Nowadays, smartphones and tablets are replacing the personal computer for the average user. As more activities move to these gadgets, so does the sensitive data with which they operate. However, there are few data protection mechanisms for the mobile world at the moment, especially for scenarios where the attacker has full access to the device (e.g. when the device is lost or stolen). In this thesis, we tackle this problem and propose a novel encryption system for Android, the top-selling mobile operating system. Our investigation of the Android platform leads to a set of observations that motivate our effort. Firstly, the existing defense mechanisms are too weak or too rigid in terms of access control and granularity of the secured data unit. Secondly, Android can be corrupted such that the default encryption solution will reveal sensitive content via the debug interface. In response, we design and (partially) implement an encryption system that addresses these shortcomings and operates in a manner that is transparent to the user. Also, by leveraging hardware security mechanisms, our system offers security guarantees even when running on a corrupted OS. Moreover, the system is conceptually designed to operate in an enterprise environment where mobile devices are administered by a central authority. Finally, we provide a prototypical implementation and evaluate our system to show the practicality of our approach.
[53]
M. Venkatachalapathy, “Scheduling Strategies in a Main-Memory MapReduce Framework: Approach for countering Reduce side skew,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
Over the past few decades, there has been a multifold increase in the amount of digital data being generated. Various attempts are being made to process this vast amount of data in a fast and efficient manner. Hadoop MapReduce is one such software framework that has gained popularity in the last few years. It provides a reliable and easy way to process huge amounts of data in parallel on large computing clusters. However, Hadoop always persists intermediate results to the local disk. As a result, Hadoop usually suffers from long execution runtimes, as it typically pays a high I/O cost for running jobs. State-of-the-art computing clusters have enough main memory capacity to hold terabytes of data in main memory. We have built the M3R (Main Memory MapReduce) framework, a prototype for generic main-memory-based data processing. M3R can execute MapReduce jobs and, in addition, general data processing jobs. This master thesis focuses in particular on countering the data-skewness problem for MapReduce jobs on M3R. Intermediate data following a skewed distribution can lead to computational imbalance amongst the reduce tasks, resulting in longer MapReduce job execution times. This provides scope for rebalancing the intermediate data and thereby reducing the total job runtimes. We propose a novel dynamic approach of data rebalancing to counter reduce-side data skewness. Our proposed on-the-fly skew-countering approach attempts to detect the level of skewness in the intermediate data and rebalances the intermediate data amongst the reduce tasks. The proposed mechanism performs all skew-countering related activities during the execution of the actual MapReduce job. We have implemented this reduce-side skew-countering mechanism as part of the M3R framework. The experiments conducted to study the behavior of this M3R data-rebalancing approach show that there is a significant reduction in the MapReduce job runtimes. In the case of data-skewed input, our proposed skew-control approach for M3R reduced the total MapReduce job runtime (by up to 31%) when compared to M3R without skew-control.
Export
BibTeX
@mastersthesis{Venkatachalapathy2012, TITLE = {Scheduling Strategies in a Main-Memory {MapReduce} Framework: Approach for countering Reduce side skew}, AUTHOR = {Venkatachalapathy, Mahendiran}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {Over the past few decades, there has been a multifold increase in the amount of digital data being generated. Various attempts are being made to process this vast amount of data in a fast and efficient manner. Hadoop MapReduce is one such software framework that has gained popularity in the last few years. It provides a reliable and easy way to process huge amounts of data in parallel on large computing clusters. However, Hadoop always persists intermediate results to the local disk. As a result, Hadoop usually suffers from long execution runtimes, as it typically pays a high I/O cost for running jobs. State-of-the-art computing clusters have enough main memory capacity to hold terabytes of data in main memory. We have built the M3R (Main Memory MapReduce) framework, a prototype for generic main-memory-based data processing. M3R can execute MapReduce jobs and, in addition, general data processing jobs. This master thesis focuses in particular on countering the data-skewness problem for MapReduce jobs on M3R. Intermediate data following a skewed distribution can lead to computational imbalance amongst the reduce tasks, resulting in longer MapReduce job execution times. This provides scope for rebalancing the intermediate data and thereby reducing the total job runtimes. We propose a novel dynamic approach of data rebalancing to counter reduce-side data skewness. Our proposed on-the-fly skew-countering approach attempts to detect the level of skewness in the intermediate data and rebalances the intermediate data amongst the reduce tasks. The proposed mechanism performs all skew-countering related activities during the execution of the actual MapReduce job. We have implemented this reduce-side skew-countering mechanism as part of the M3R framework. The experiments conducted to study the behavior of this M3R data-rebalancing approach show that there is a significant reduction in the MapReduce job runtimes. In the case of data-skewed input, our proposed skew-control approach for M3R reduced the total MapReduce job runtime (by up to 31\%) when compared to M3R without skew-control.}, }
Endnote
%0 Thesis %A Venkatachalapathy, Mahendiran %Y Dittrich, Jens %A referee: Quian&#233;-Ruiz, Jorge-Arnulfo %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Scheduling Strategies in a Main-Memory MapReduce Framework: Approach for countering Reduce side skew : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A183-F %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master %X Over the past few decades, there has been a multifold increase in the amount of digital data being generated. Various attempts are being made to process this vast amount of data in a fast and efficient manner. Hadoop MapReduce is one such software framework that has gained popularity in the last few years. It provides a reliable and easy way to process huge amounts of data in parallel on large computing clusters. However, Hadoop always persists intermediate results to the local disk. As a result, Hadoop usually suffers from long execution runtimes, as it typically pays a high I/O cost for running jobs. State-of-the-art computing clusters have enough main memory capacity to hold terabytes of data in main memory. We have built the M3R (Main Memory MapReduce) framework, a prototype for generic main-memory-based data processing. M3R can execute MapReduce jobs and, in addition, general data processing jobs. This master thesis focuses in particular on countering the data-skewness problem for MapReduce jobs on M3R. Intermediate data following a skewed distribution can lead to computational imbalance amongst the reduce tasks, resulting in longer MapReduce job execution times. This provides scope for rebalancing the intermediate data and thereby reducing the total job runtimes. We propose a novel dynamic approach of data rebalancing to counter reduce-side data skewness. Our proposed on-the-fly skew-countering approach attempts to detect the level of skewness in the intermediate data and rebalances the intermediate data amongst the reduce tasks. The proposed mechanism performs all skew-countering related activities during the execution of the actual MapReduce job. We have implemented this reduce-side skew-countering mechanism as part of the M3R framework. The experiments conducted to study the behavior of this M3R data-rebalancing approach show that there is a significant reduction in the MapReduce job runtimes. In the case of data-skewed input, our proposed skew-control approach for M3R reduced the total MapReduce job runtime (by up to 31%) when compared to M3R without skew-control.
[54]
Q. Zheng, “Sparse Dictionary Learning with Simplex Constraints and Application to Topic Modeling,” Universität des Saarlandes, Saarbrücken, 2012.
Abstract
The probabilistic mixture model is a powerful tool to provide a low-dimensional representation of count data. In the context of topic modeling, this amounts to representing the distribution of one document as a mixture of multiple distributions known as topics. The mixing proportions are called coefficients. A common attempt is to introduce sparsity into both the topics and the coefficients for better interpretability. We first discuss the problem of recovering sparse coefficients of given documents when the topics are known. This is formulated as a penalized least squares problem on the probability simplex, where the sparsity is achieved through regularization. However, the typical ℓ1 regularizer becomes toothless in this case, since it is constant over the simplex. To overcome this issue, we propose a group of concave penalties for inducing sparsity. An alternative approach is to post-process the solution of the non-negative lasso to produce results that conform to the simplex constraint. Our experiments show that both kinds of approaches can effectively recover the sparsity pattern of the coefficients. We then elaborately compare their robustness for different characteristics of the input data. The second problem we discuss is to model both the topics and the coefficients of a collection of documents via matrix factorization. We propose the LpT approach, in which all the topics and coefficients are constrained on the simplex, and the ℓp penalty is imposed on each topic to promote sparsity. We also consider procedures that post-process the solutions of other methods. For example, the L1 approach first solves the problem where the simplex constraints imposed on the topics are relaxed into non-negativity constraints, and the ℓp penalty is replaced by the ℓ1 penalty. Afterwards, L1 normalizes the estimated topics to generate results satisfying the simplex constraints. As detecting the number of mixture components inherent in the data is of central importance for the probabilistic mixture model, we analyze how the regularization techniques can help us to automatically find this number. We compare the capabilities of these approaches to recover the low-rank structure underlying the data when the number of topics is correctly specified and over-specified, respectively. The empirical results demonstrate that LpT and L1 can discover the sparsity pattern of the ground truth. In addition, when the number of topics is over-specified, they adapt to the true number of topics.
Export
BibTeX
@mastersthesis{Zheng2012, TITLE = {Sparse Dictionary Learning with Simplex Constraints and Application to Topic Modeling}, AUTHOR = {Zheng, Qinqing}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2012}, DATE = {2012}, ABSTRACT = {The probabilistic mixture model is a powerful tool to provide a low-dimensional representation of count data. In the context of topic modeling, this amounts to representing the distribution of one document as a mixture of multiple distributions known as topics. The mixing proportions are called coefficients. A common attempt is to introduce sparsity into both the topics and the coefficients for better interpretability. We first discuss the problem of recovering sparse coefficients of given documents when the topics are known. This is formulated as a penalized least squares problem on the probability simplex, where the sparsity is achieved through regularization. However, the typical $\ell_1$ regularizer becomes toothless in this case, since it is constant over the simplex. To overcome this issue, we propose a group of concave penalties for inducing sparsity. An alternative approach is to post-process the solution of the non-negative lasso to produce results that conform to the simplex constraint. Our experiments show that both kinds of approaches can effectively recover the sparsity pattern of the coefficients. We then elaborately compare their robustness for different characteristics of the input data. The second problem we discuss is to model both the topics and the coefficients of a collection of documents via matrix factorization. We propose the LpT approach, in which all the topics and coefficients are constrained on the simplex, and the $\ell_p$ penalty is imposed on each topic to promote sparsity. We also consider procedures that post-process the solutions of other methods. For example, the L1 approach first solves the problem where the simplex constraints imposed on the topics are relaxed into non-negativity constraints, and the $\ell_p$ penalty is replaced by the $\ell_1$ penalty. Afterwards, L1 normalizes the estimated topics to generate results satisfying the simplex constraints. As detecting the number of mixture components inherent in the data is of central importance for the probabilistic mixture model, we analyze how the regularization techniques can help us to automatically find this number. We compare the capabilities of these approaches to recover the low-rank structure underlying the data when the number of topics is correctly specified and over-specified, respectively. The empirical results demonstrate that LpT and L1 can discover the sparsity pattern of the ground truth. In addition, when the number of topics is over-specified, they adapt to the true number of topics.}, }
Endnote
%0 Thesis %A Zheng, Qinqing %Y Hein, Matthias %A referee: Slawski, Martin %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Sparse Dictionary Learning with Simplex Constraints and Application to Topic Modeling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A192-D %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2012 %V master %9 master %X The probabilistic mixture model is a powerful tool to provide a low-dimensional representation of count data. In the context of topic modeling, this amounts to representing the distribution of one document as a mixture of multiple distributions known as topics. The mixing proportions are called coefficients. A common attempt is to introduce sparsity into both the topics and the coefficients for better interpretability. We first discuss the problem of recovering sparse coefficients of given documents when the topics are known. This is formulated as a penalized least squares problem on the probability simplex, where the sparsity is achieved through regularization. However, the typical ℓ1 regularizer becomes toothless in this case since it is constant over the simplex. To overcome this issue, we propose a group of concave penalties for inducing sparsity. An alternative approach is to post-process the solution of the non-negative lasso to produce results that conform to the simplex constraint. Our experiments show that both kinds of approaches can effectively recover the sparsity pattern of the coefficients. We then elaborately compare their robustness for different characteristics of input data. The second problem we discuss is to model both the topics and the coefficients of a collection of documents via matrix factorization. We propose the LpT approach, in which all the topics and coefficients are constrained on the simplex, and the ℓp penalty is imposed on each topic to promote sparsity. We also consider procedures that post-process the solutions of other methods. For example, the L1 approach first solves the problem where the simplex constraints imposed on the topics are relaxed into non-negativity constraints, and the ℓp penalty is then replaced by the ℓ1 penalty. Afterwards, L1 normalizes the estimated topics to generate results satisfying the simplex constraints. As detecting the number of mixture components inherent in the data is of central importance for the probabilistic mixture model, we analyze how the regularization techniques can help us to automatically find this number. We compare the capabilities of these approaches to recover the low-rank structure underlying the data when the number of topics is correctly specified and over-specified, respectively. The empirical results demonstrate that LpT and L1 can discover the sparsity pattern of the ground truth. In addition, when the number of topics is over-specified, they adapt to the true number of topics.
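The simplex-constrained recovery problems in the abstract above all require projecting candidate solutions onto the probability simplex. As an illustration (not taken from the thesis), the standard O(n log n) Euclidean projection onto the simplex {x : x ≥ 0, Σx = 1} can be sketched as:

```python
def project_simplex(v):
    """Euclidean projection of a vector v onto the probability simplex.

    Classic sort-based algorithm: sort descending, find the largest index
    whose entry stays positive after a uniform shift, then shift and clip.
    """
    u = sorted(v, reverse=True)          # sorted copy, descending
    css = 0.0                            # running cumulative sum of u
    theta = 0.0                          # the uniform shift to subtract
    for j, uj in enumerate(u):
        css += uj
        if uj - (css - 1.0) / (j + 1) > 0:
            theta = (css - 1.0) / (j + 1)
    return [max(x - theta, 0.0) for x in v]


# e.g. project_simplex([2.0, 0.0]) -> [1.0, 0.0]
```

The same projection is the building block for projected-gradient solvers of penalized least squares over the simplex, which is the kind of problem the thesis studies.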
2011
[55]
F. Abed, “Coordination Mechanisms for Unrelated Machine Scheduling,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
We investigate load balancing games in the context of unrelated machine scheduling. In such a game, there are a number of jobs and a number of machines, and each job needs to be scheduled on one machine. A collection of values pij is given, where pij indicates the processing time of job i on machine j. Moreover, each job is controlled by a selfish player who only wants to minimize the completion time of his job while disregarding other players' welfare. The outcome schedule is a Nash equilibrium if no player can unilaterally change his machine and reduce the completion time of his job. It is known that in an equilibrium, the performance of the system can be far from optimal. The degradation of the system performance in Nash equilibrium is defined as the price of anarchy (PoA): the ratio of the cost of the worst Nash equilibrium to the cost of the optimal schedule. Clever scheduling policies can be designed to reduce the PoA. These scheduling policies are called coordination mechanisms. It has been posed as an open question: "What is the best possible lower bound when coordination mechanisms use preemption?" In this thesis we prove a lower bound of Ω(log m / log log m) for all symmetric preemptive coordination mechanisms. Moreover, we study the lower bound for the unusual case when the coordination mechanisms are asymmetric, and we obtain the same bound under the weak assumption that machines have no IDs. On the positive side, we prove that the inefficiency-based mechanism can achieve a constant PoA when the maximum inefficiency of the jobs is bounded by a constant.
Export
BibTeX
@mastersthesis{Abed2011, TITLE = {Coordination Mechanisms for Unrelated Machine Scheduling}, AUTHOR = {Abed, Fidaa}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {We investigate load balancing games in the context of unrelated machine scheduling. In such a game, there are a number of jobs and a number of machines, and each job needs to be scheduled on one machine. A collection of values pij is given, where pij indicates the processing time of job i on machine j. Moreover, each job is controlled by a selfish player who only wants to minimize the completion time of his job while disregarding other players' welfare. The outcome schedule is a Nash equilibrium if no player can unilaterally change his machine and reduce the completion time of his job. It is known that in an equilibrium, the performance of the system can be far from optimal. The degradation of the system performance in Nash equilibrium is defined as the price of anarchy (PoA): the ratio of the cost of the worst Nash equilibrium to the cost of the optimal schedule. Clever scheduling policies can be designed to reduce the PoA. These scheduling policies are called coordination mechanisms. It has been posed as an open question: "What is the best possible lower bound when coordination mechanisms use preemption?" In this thesis we prove a lower bound of $\Omega(\log m / \log \log m)$ for all symmetric preemptive coordination mechanisms. Moreover, we study the lower bound for the unusual case when the coordination mechanisms are asymmetric, and we obtain the same bound under the weak assumption that machines have no IDs. On the positive side, we prove that the inefficiency-based mechanism can achieve a constant PoA when the maximum inefficiency of the jobs is bounded by a constant.}, }
Endnote
%0 Thesis %A Abed, Fidaa %Y Mehlhorn, Kurt %A referee: Huang, Chien-Chung %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Coordination Mechanisms for Unrelated Machine Scheduling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A1A2-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master %X We investigate load balancing games in the context of unrelated machine scheduling. In such a game, there are a number of jobs and a number of machines, and each job needs to be scheduled on one machine. A collection of values pij is given, where pij indicates the processing time of job i on machine j. Moreover, each job is controlled by a selfish player who only wants to minimize the completion time of his job while disregarding other players' welfare. The outcome schedule is a Nash equilibrium if no player can unilaterally change his machine and reduce the completion time of his job. It is known that in an equilibrium, the performance of the system can be far from optimal. The degradation of the system performance in Nash equilibrium is defined as the price of anarchy (PoA): the ratio of the cost of the worst Nash equilibrium to the cost of the optimal schedule. Clever scheduling policies can be designed to reduce the PoA. These scheduling policies are called coordination mechanisms. It has been posed as an open question: "What is the best possible lower bound when coordination mechanisms use preemption?" In this thesis we prove a lower bound of Ω(log m / log log m) for all symmetric preemptive coordination mechanisms. Moreover, we study the lower bound for the unusual case when the coordination mechanisms are asymmetric, and we obtain the same bound under the weak assumption that machines have no IDs. On the positive side, we prove that the inefficiency-based mechanism can achieve a constant PoA when the maximum inefficiency of the jobs is bounded by a constant.
[56]
I. L. Ciolacu, “Universally Composable Relativistic Commitments,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
Designing communication protocols specifically adapted to relativistic situations (i.e., constrained by special relativity theory) means taking advantage of uniquely relativistic features to accomplish otherwise impossible tasks. Kent [Ken99] has demonstrated, for example, that secure bit commitment is possible using a protocol exploiting relativistic causality constraints, even though it is known to be impossible otherwise. Therefore, Kent's protocol gives a theoretical solution to the problem of finding commitment schemes secure over arbitrarily long time intervals. The functionality only requires from the committer a sequence of communications, including a post-revelation validation, each of which is guaranteed to be independent of its predecessor. We propose to verify the security of the relativistic commitment not as a stand-alone protocol, but as an entity which is part of an unpredictable environment. To achieve this task we use the universal composability paradigm defined by Canetti [Can01]. The relevant property of the paradigm is the guarantee of security even when a secure protocol is composed with an arbitrary set of protocols, or, more generally, when the protocol is used as an element of a possibly complex system. Unfortunately, Kent's relativistic bit commitment satisfies universal composability only with certain restrictions on the adversarial model. However, we construct a two-party universally composable commitment protocol, also based on general relativistic assumptions.
Export
BibTeX
@mastersthesis{Ciolacu2011, TITLE = {Universally Composable Relativistic Commitments}, AUTHOR = {Ciolacu, Ines Lucia}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011-05}, ABSTRACT = {Designing communication protocols specifically adapted to relativistic situations (i.e., constrained by special relativity theory) means taking advantage of uniquely relativistic features to accomplish otherwise impossible tasks. Kent [Ken99] has demonstrated, for example, that secure bit commitment is possible using a protocol exploiting relativistic causality constraints, even though it is known to be impossible otherwise. Therefore, Kent's protocol gives a theoretical solution to the problem of finding commitment schemes secure over arbitrarily long time intervals. The functionality only requires from the committer a sequence of communications, including a post-revelation validation, each of which is guaranteed to be independent of its predecessor. We propose to verify the security of the relativistic commitment not as a stand-alone protocol, but as an entity which is part of an unpredictable environment. To achieve this task we use the universal composability paradigm defined by Canetti [Can01]. The relevant property of the paradigm is the guarantee of security even when a secure protocol is composed with an arbitrary set of protocols, or, more generally, when the protocol is used as an element of a possibly complex system. Unfortunately, Kent's relativistic bit commitment satisfies universal composability only with certain restrictions on the adversarial model. However, we construct a two-party universally composable commitment protocol, also based on general relativistic assumptions.}, }
Endnote
%0 Thesis %A Ciolacu, Ines Lucia %Y Unruh, Dominique %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations %T Universally Composable Relativistic Commitments : %U http://hdl.handle.net/11858/00-001M-0000-0027-A1B6-0 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master %X Designing communication protocols specifically adapted to relativistic situations (i.e., constrained by special relativity theory) means taking advantage of uniquely relativistic features to accomplish otherwise impossible tasks. Kent [Ken99] has demonstrated, for example, that secure bit commitment is possible using a protocol exploiting relativistic causality constraints, even though it is known to be impossible otherwise. Therefore, Kent's protocol gives a theoretical solution to the problem of finding commitment schemes secure over arbitrarily long time intervals. The functionality only requires from the committer a sequence of communications, including a post-revelation validation, each of which is guaranteed to be independent of its predecessor. We propose to verify the security of the relativistic commitment not as a stand-alone protocol, but as an entity which is part of an unpredictable environment. To achieve this task we use the universal composability paradigm defined by Canetti [Can01]. The relevant property of the paradigm is the guarantee of security even when a secure protocol is composed with an arbitrary set of protocols, or, more generally, when the protocol is used as an element of a possibly complex system. Unfortunately, Kent's relativistic bit commitment satisfies universal composability only with certain restrictions on the adversarial model. However, we construct a two-party universally composable commitment protocol, also based on general relativistic assumptions.
[57]
M. Ebrahimi, “Solving Linear Programs in MapReduce,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@mastersthesis{Ebrahimi2011, TITLE = {Solving Linear Programs in {M}ap{R}educe}, AUTHOR = {Ebrahimi, Mahdi}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-24D5DC0DEA1F99D1C1257903003A46A3-Ebrahimi2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Ebrahimi, Mahdi %Y Weikum, Gerhard %A referee: Gemulla, Rainer %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Solving Linear Programs in MapReduce : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-14C4-E %F EDOC: 618981 %F OTHER: Local-ID: C1256DBF005F876D-24D5DC0DEA1F99D1C1257903003A46A3-Ebrahimi2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master
[58]
J. Iqbal, “Lineage Enabled Query Answering in Uncertain Knowledge Bases,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
We present a unified framework for query answering over uncertain RDF knowledge bases. Specifically, our proposed design combines correlated base facts with a query-driven, top-down deductive grounding phase of first-order logic formulas (i.e., Horn rules), followed by a probabilistic inference phase. In addition to static input correlations among base facts, we employ the lineage structure obtained from processing the rules during the grounding phase in order to trace the logical dependencies of query answers (i.e., derived facts) back to the base facts. Thus, correlations (or more precisely: dependencies) among facts in a knowledge base may arise from two sources: 1) static input dependencies obtained from real-world observations; and 2) dynamic dependencies induced at query time by the rule-based lineage structure of the query answer. Our implementation employs state-of-the-art inference techniques: we apply exact inference whenever tractable, the detection of shared factors, shrinkage of Boolean formulas when feasible, and Gibbs sampling in the general case. Our experiments are conducted on real data sets with synthetic expansion of correlated base facts. The experimental evaluation demonstrates the practical viability and scalability of our approach, achieving interactive query response times over a very large knowledge base. The experimental results confirm the effectiveness of the presented framework.
Export
BibTeX
@mastersthesis{Iqbal2011, TITLE = {Lineage Enabled Query Answering in Uncertain Knowledge Bases}, AUTHOR = {Iqbal, Javeria}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {We present a unified framework for query answering over uncertain RDF knowledge bases. Specifically, our proposed design combines correlated base facts with a query-driven, top-down deductive grounding phase of first-order logic formulas (i.e., Horn rules), followed by a probabilistic inference phase. In addition to static input correlations among base facts, we employ the lineage structure obtained from processing the rules during the grounding phase in order to trace the logical dependencies of query answers (i.e., derived facts) back to the base facts. Thus, correlations (or more precisely: dependencies) among facts in a knowledge base may arise from two sources: 1) static input dependencies obtained from real-world observations; and 2) dynamic dependencies induced at query time by the rule-based lineage structure of the query answer. Our implementation employs state-of-the-art inference techniques: we apply exact inference whenever tractable, the detection of shared factors, shrinkage of Boolean formulas when feasible, and Gibbs sampling in the general case. Our experiments are conducted on real data sets with synthetic expansion of correlated base facts. The experimental evaluation demonstrates the practical viability and scalability of our approach, achieving interactive query response times over a very large knowledge base. The experimental results confirm the effectiveness of the presented framework.}, }
Endnote
%0 Thesis %A Iqbal, Javeria %Y Theobald, Martin %A referee: Michel, Sebastian %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Lineage Enabled Query Answering in Uncertain Knowledge Bases : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A1EA-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master %X We present a unified framework for query answering over uncertain RDF knowledge bases. Specifically, our proposed design combines correlated base facts with a query-driven, top-down deductive grounding phase of first-order logic formulas (i.e., Horn rules), followed by a probabilistic inference phase. In addition to static input correlations among base facts, we employ the lineage structure obtained from processing the rules during the grounding phase in order to trace the logical dependencies of query answers (i.e., derived facts) back to the base facts. Thus, correlations (or more precisely: dependencies) among facts in a knowledge base may arise from two sources: 1) static input dependencies obtained from real-world observations; and 2) dynamic dependencies induced at query time by the rule-based lineage structure of the query answer. Our implementation employs state-of-the-art inference techniques: we apply exact inference whenever tractable, the detection of shared factors, shrinkage of Boolean formulas when feasible, and Gibbs sampling in the general case. Our experiments are conducted on real data sets with synthetic expansion of correlated base facts. The experimental evaluation demonstrates the practical viability and scalability of our approach, achieving interactive query response times over a very large knowledge base. The experimental results confirm the effectiveness of the presented framework.
[59]
V. N. Ivanova, “Comparison of Methods for the Discovery of Copy Number Aberrations Relevant to Cancer,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
Recurrent genomic amplifications and deletions characterize cancer genomes and contribute to disease evolution. Array Comparative Genomic Hybridization (aCGH) technology allows detection of chromosomal copy number aberrations in the genomic DNA of tumors with high resolution. The association of consistent copy number aberrations with particular types of cancer facilitates the understanding of the pathogenesis of the disease, and contributes towards the improvement of diagnosis, prognosis and the development of drugs. However, distinguishing aberrations that are relevant to cancer from random background aberrations is a difficult task, due to the high dimensionality of the aCGH data. Different statistical methods have been developed to identify non-random gains and losses across multiple samples. Their approaches vary in several aspects: the requirements necessary for an aberration to be recurrent, the preprocessing of the input data, the statistical approaches used for assessing the significance of a recurrent aberration, and other biological considerations they use. So far, multiple-sample analysis methods have only been evaluated qualitatively and their relative merits remain unknown. In this work we propose an approach for quantitative evaluation of the performance of four selected methods. We use simulated data with known aberrations to validate each method and we interpret the different outcomes. We also compare the performance of the methods on a collection of neuroblastoma tumors by quantifying the agreement between methods. We select appropriate techniques to combine the outputs of the methods into a meaningful aggregation in order to obtain high-confidence lists of significant copy number aberrations.
Export
BibTeX
@mastersthesis{Ivanova2011, TITLE = {Comparison of Methods for the Discovery of Copy Number Aberrations Relevant to Cancer}, AUTHOR = {Ivanova, Violeta Nikolaeva}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {Recurrent genomic amplifications and deletions characterize cancer genomes and contribute to disease evolution. Array Comparative Genomic Hybridization (aCGH) technology allows detection of chromosomal copy number aberrations in the genomic DNA of tumors with high resolution. The association of consistent copy number aberrations with particular types of cancer facilitates the understanding of the pathogenesis of the disease, and contributes towards the improvement of diagnosis, prognosis and the development of drugs. However, distinguishing aberrations that are relevant to cancer from random background aberrations is a difficult task, due to the high dimensionality of the aCGH data. Different statistical methods have been developed to identify non-random gains and losses across multiple samples. Their approaches vary in several aspects: the requirements necessary for an aberration to be recurrent, the preprocessing of the input data, the statistical approaches used for assessing the significance of a recurrent aberration, and other biological considerations they use. So far, multiple-sample analysis methods have only been evaluated qualitatively and their relative merits remain unknown. In this work we propose an approach for quantitative evaluation of the performance of four selected methods. We use simulated data with known aberrations to validate each method and we interpret the different outcomes. We also compare the performance of the methods on a collection of neuroblastoma tumors by quantifying the agreement between methods. We select appropriate techniques to combine the outputs of the methods into a meaningful aggregation in order to obtain high-confidence lists of significant copy number aberrations.}, }
Endnote
%0 Thesis %A Ivanova, Violeta Nikolaeva %Y Lengauer, Thomas %A referee: Lenhof, Hans-Peter %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Comparison of Methods for the Discovery of Copy Number Aberrations Relevant to Cancer : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A1F0-C %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P 128 p. %V master %9 master %X Recurrent genomic amplifications and deletions characterize cancer genomes and contribute to disease evolution. Array Comparative Genomic Hybridization (aCGH) technology allows detection of chromosomal copy number aberrations in the genomic DNA of tumors with high resolution. The association of consistent copy number aberrations with particular types of cancer facilitates the understanding of the pathogenesis of the disease, and contributes towards the improvement of diagnosis, prognosis and the development of drugs. However, distinguishing aberrations that are relevant to cancer from random background aberrations is a difficult task, due to the high dimensionality of the aCGH data. Different statistical methods have been developed to identify non-random gains and losses across multiple samples. Their approaches vary in several aspects: the requirements necessary for an aberration to be recurrent, the preprocessing of the input data, the statistical approaches used for assessing the significance of a recurrent aberration, and other biological considerations they use. So far, multiple-sample analysis methods have only been evaluated qualitatively and their relative merits remain unknown. In this work we propose an approach for quantitative evaluation of the performance of four selected methods. We use simulated data with known aberrations to validate each method and we interpret the different outcomes. We also compare the performance of the methods on a collection of neuroblastoma tumors by quantifying the agreement between methods. We select appropriate techniques to combine the outputs of the methods into a meaningful aggregation in order to obtain high-confidence lists of significant copy number aberrations.
[60]
Y. Kargin, “Distributed Analytics over Web Archives,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@mastersthesis{KarginMaster2011, TITLE = {Distributed Analytics over Web Archives}, AUTHOR = {Kargin, Yagiz}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-E222B52EB02061B7C125783F0041FA53-KarginMaster2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Kargin, Yagiz %Y Bedathur, Srikanta %A referee: Anand, Avishek %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Distributed Analytics over Web Archives : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-1447-A %F EDOC: 618943 %F OTHER: Local-ID: C1256DBF005F876D-E222B52EB02061B7C125783F0041FA53-KarginMaster2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master
[61]
E. Kuzey, “Extraction of Temporal Facts and Events from Wikipedia,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@mastersthesis{Kuzey2011, TITLE = {Extraction of Temporal Facts and Events from Wikipedia}, AUTHOR = {Kuzey, Erdal}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-E87CB338D81CD766C12578CC003C2F6E-Kuzey2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Kuzey, Erdal %Y Weikum, Gerhard %A referee: Theobald, Martin %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Extraction of Temporal Facts and Events from Wikipedia : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-1455-A %F EDOC: 618966 %F OTHER: Local-ID: C1256DBF005F876D-E87CB338D81CD766C12578CC003C2F6E-Kuzey2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master
[62]
D. Mahmoud, “Multiple-frame Image Super Resolution Based on Optic Flow,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
Super resolution is the task of reconstructing one or several high resolution images from one or several low resolution images. A variety of super resolution methods have been proposed over the past three decades, some following a single-frame based methodology while others utilize a multiple-frame based one. These methods are usually very sensitive to their underlying model of data and noise, which limits their performance. In this thesis, we propose and compare two multiple-frame based approaches that address such shortcomings. In the first proposal we investigate a fast, local approach which combines the low resolution frames via warping and then performs diffusion-based inpainting. The second proposal models the image formation process in a variational framework with regularization that is robust to errors in motion and blur estimation. In addition, we introduce a brightness adaptation step which results in images with sharper edges. An accurate estimation of optical flow among the low resolution measurements is a fundamental step towards high quality super resolution for both methods. Experiments confirm the effectiveness of our method on a variety of super resolution benchmark sequences, as well as its superiority in performance over other closely-related methods.
Export
BibTeX
@mastersthesis{Mahmoud2011, TITLE = {Multiple-frame Image Super Resolution Based on Optic Flow}, AUTHOR = {Mahmoud, Dina}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {Super resolution is the task of reconstructing one or several high resolution images from one or several low resolution images. A variety of super resolution methods have been proposed over the past three decades, some following a single-frame based methodology while others utilize a multiple-frame based one. These methods are usually very sensitive to their underlying model of data and noise, which limits their performance. In this thesis, we propose and compare two multiple-frame based approaches that address such shortcomings. In the first proposal we investigate a fast, local approach which combines the low resolution frames via warping and then performs diffusion-based inpainting. The second proposal models the image formation process in a variational framework with regularization that is robust to errors in motion and blur estimation. In addition, we introduce a brightness adaptation step which results in images with sharper edges. An accurate estimation of optical flow among the low resolution measurements is a fundamental step towards high quality super resolution for both methods. Experiments confirm the effectiveness of our method on a variety of super resolution benchmark sequences, as well as its superiority in performance over other closely-related methods.}, }
Endnote
%0 Thesis %A Mahmoud, Dina %Y Bruhn, Andres %A referee: Weickert, Joachim %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Multiple-frame Image Super Resolution Based on Optic Flow : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-A7F2-3 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master %X Super resolution is the task of reconstructing one or several high resolution images from one or several low resolution images. A variety of super resolution methods have been proposed over the past three decades, some following a single-frame based methodology while others utilize a multiple-frame based one. These methods are usually very sensitive to their underlying model of data and noise, which limits their performance. In this thesis, we propose and compare two multiple-frame based approaches that address such shortcomings. In the first proposal we investigate a fast, local approach which combines the low resolution frames via warping and then performs diffusion-based inpainting. The second proposal models the image formation process in a variational framework with regularization that is robust to errors in motion and blur estimation. In addition, we introduce a brightness adaptation step which results in images with sharper edges. An accurate estimation of optical flow among the low resolution measurements is a fundamental step towards high quality super resolution for both methods. Experiments confirm the effectiveness of our method on a variety of super resolution benchmark sequences, as well as its superiority in performance over other closely-related methods.
[63]
M. Malinowski, “Optimization Algorithms in the Reconstruction of MR Images: A Comparative Study,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
The time that an imaging device needs to produce results is one of the most crucial factors in medical imaging. A shorter scanning duration causes fewer artifacts, such as those created by patient motion. In addition, it increases patient comfort and, in the case of some imaging modalities, also decreases exposure to radiation. There are several possibilities, hardware-based or software-based, to improve imaging speed. One way is to speed up the scanning process by acquiring fewer measurements. A recently developed mathematical framework called compressed sensing shows that it is possible to accurately recover undersampled images provided a suitable measurement matrix is used and the image itself has a sparse representation. Nevertheless, not only the measurements are important but also good reconstruction models are required. Such models are usually expressed as optimization problems. In this thesis, we concentrated on the reconstruction of undersampled Magnetic Resonance (MR) images. For this purpose a complex-valued reconstruction model was provided. Since the reconstruction should be as quick as possible, fast methods to find the solution of the reconstruction problem are required. To meet this objective, three popular algorithms, FISTA, Augmented Lagrangian, and Non-linear Conjugate Gradient, were adapted to work with our model. By changing the complex-valued reconstruction model slightly and dualizing the problem, we obtained an instance of a quadratically constrained quadratic program where both the objective function and the constraints are twice differentiable. Hence the new model opened the door to two other methods: a first-order method which resembles FISTA, called in this thesis Normed Constrained Quadratic FGP, and a second-order method called Truncated Newton Primal Dual Interior Point.
Next, in order to compare the performance of the methods, we set up experiments and evaluated all presented methods on the problem of reconstructing undersampled MR images. In the experiments we used the number of invocations of the Fourier transform to measure the performance of all algorithms. As a result of the experiments, we found that in the context of the original model the performance of Augmented Lagrangian is better than that of the other two methods. The performance of Non-linear Conjugate Gradient and FISTA is about the same. In the context of the extended model, Normed Constrained Quadratic FGP beats the Truncated Newton Primal Dual Interior Point method.
Export
BibTeX
@mastersthesis{Malinowski2011, TITLE = {Optimization Algorithms in the Reconstruction of {MR} Images: A Comparative Study}, AUTHOR = {Malinowski, Mateusz}, LANGUAGE = {eng}, LOCALID = {Local-ID: C12576EE0048963A-A807E45B9B37277EC12579A3003A3E73-Malinowski2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {The time that an imaging device needs to produce results is one of the most crucial factors in medical imaging. A shorter scanning duration causes fewer artifacts, such as those created by patient motion. In addition, it increases patient comfort and, in the case of some imaging modalities, also decreases exposure to radiation. There are several possibilities, hardware-based or software-based, to improve imaging speed. One way is to speed up the scanning process by acquiring fewer measurements. A recently developed mathematical framework called compressed sensing shows that it is possible to accurately recover undersampled images provided a suitable measurement matrix is used and the image itself has a sparse representation. Nevertheless, not only the measurements are important but also good reconstruction models are required. Such models are usually expressed as optimization problems. In this thesis, we concentrated on the reconstruction of undersampled Magnetic Resonance (MR) images. For this purpose a complex-valued reconstruction model was provided. Since the reconstruction should be as quick as possible, fast methods to find the solution of the reconstruction problem are required. To meet this objective, three popular algorithms, FISTA, Augmented Lagrangian, and Non-linear Conjugate Gradient, were adapted to work with our model. By changing the complex-valued reconstruction model slightly and dualizing the problem, we obtained an instance of a quadratically constrained quadratic program where both the objective function and the constraints are twice differentiable. 
Hence the new model opened the door to two other methods: a first-order method which resembles FISTA, called in this thesis Normed Constrained Quadratic FGP, and a second-order method called Truncated Newton Primal Dual Interior Point. Next, in order to compare the performance of the methods, we set up experiments and evaluated all presented methods on the problem of reconstructing undersampled MR images. In the experiments we used the number of invocations of the Fourier transform to measure the performance of all algorithms. As a result of the experiments, we found that in the context of the original model the performance of Augmented Lagrangian is better than that of the other two methods. The performance of Non-linear Conjugate Gradient and FISTA is about the same. In the context of the extended model, Normed Constrained Quadratic FGP beats the Truncated Newton Primal Dual Interior Point method.}, }
Endnote
%0 Thesis %A Malinowski, Mateusz %Y Seeger, Matthias %A referee: Hein, Matthias %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Optimization Algorithms in the Reconstruction of MR Images: A Comparative Study : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-11B0-3 %F EDOC: 618778 %F OTHER: Local-ID: C12576EE0048963A-A807E45B9B37277EC12579A3003A3E73-Malinowski2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master %X The time that an imaging device needs to produce results is one of the most crucial factors in medical imaging. A shorter scanning duration causes fewer artifacts, such as those created by patient motion. In addition, it increases patient comfort and, in the case of some imaging modalities, also decreases exposure to radiation. There are several possibilities, hardware-based or software-based, to improve imaging speed. One way is to speed up the scanning process by acquiring fewer measurements. A recently developed mathematical framework called compressed sensing shows that it is possible to accurately recover undersampled images provided a suitable measurement matrix is used and the image itself has a sparse representation. Nevertheless, not only the measurements are important but also good reconstruction models are required. Such models are usually expressed as optimization problems. In this thesis, we concentrated on the reconstruction of undersampled Magnetic Resonance (MR) images. For this purpose a complex-valued reconstruction model was provided. Since the reconstruction should be as quick as possible, fast methods to find the solution of the reconstruction problem are required. To meet this objective, three popular algorithms, FISTA, Augmented Lagrangian, and Non-linear Conjugate Gradient, were adapted to work with our model. 
By changing the complex-valued reconstruction model slightly and dualizing the problem, we obtained an instance of a quadratically constrained quadratic program where both the objective function and the constraints are twice differentiable. Hence the new model opened the door to two other methods: a first-order method which resembles FISTA, called in this thesis Normed Constrained Quadratic FGP, and a second-order method called Truncated Newton Primal Dual Interior Point. Next, in order to compare the performance of the methods, we set up experiments and evaluated all presented methods on the problem of reconstructing undersampled MR images. In the experiments we used the number of invocations of the Fourier transform to measure the performance of all algorithms. As a result of the experiments, we found that in the context of the original model the performance of Augmented Lagrangian is better than that of the other two methods. The performance of Non-linear Conjugate Gradient and FISTA is about the same. In the context of the extended model, Normed Constrained Quadratic FGP beats the Truncated Newton Primal Dual Interior Point method.
[64]
N. Prytkova, “Modeling and Evaluation of Co-Evolution in Collective Web Memories,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
The constantly evolving Web reflects the evolution of society in cyberspace. Projects like the Open Directory Project (dmoz.org) can be understood as a collective memory of society on the Web. The main assumption is that such collective Web memories evolve when a certain cognition level about a concept has been exceeded. In the scope of our work we analyse the New York Times archive for concept detection. There are several approaches to concept modelling. We introduce an alternative model for concepts which does not make any additional assumptions about the types of contained entities or the number of entities in the corpus. Moreover, the proposed distributed concept computation algorithm enables large-scale archive analysis. We also introduce a model of cognition level and explain how it can be employed to predict changes in the category system of DMOZ.
Export
BibTeX
@mastersthesis{Prytkova2011, TITLE = {Modeling and Evaluation of Co-Evolution in Collective Web Memories}, AUTHOR = {Prytkova, Natalia}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-680CF1F2F3D8F339C1257957003777B9-Prytkova2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {The constantly evolving Web reflects the evolution of society in cyberspace. Projects like the Open Directory Project (dmoz.org) can be understood as a collective memory of society on the Web. The main assumption is that such collective Web memories evolve when a certain cognition level about a concept has been exceeded. In the scope of our work we analyse the New York Times archive for concept detection. There are several approaches to concept modelling. We introduce an alternative model for concepts which does not make any additional assumptions about the types of contained entities or the number of entities in the corpus. Moreover, the proposed distributed concept computation algorithm enables large-scale archive analysis. We also introduce a model of cognition level and explain how it can be employed to predict changes in the category system of DMOZ.}, }
Endnote
%0 Thesis %A Prytkova, Natalia %Y Weikum, Gerhard %A referee: Spaniol, Marc %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Modeling and Evaluation of Co-Evolution in Collective Web Memories : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-1493-D %F EDOC: 618987 %F OTHER: Local-ID: C1256DBF005F876D-680CF1F2F3D8F339C1257957003777B9-Prytkova2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master %X The constantly evolving Web reflects the evolution of society in cyberspace. Projects like the Open Directory Project (dmoz.org) can be understood as a collective memory of society on the Web. The main assumption is that such collective Web memories evolve when a certain cognition level about a concept has been exceeded. In the scope of our work we analyse the New York Times archive for concept detection. There are several approaches to concept modelling. We introduce an alternative model for concepts which does not make any additional assumptions about the types of contained entities or the number of entities in the corpus. Moreover, the proposed distributed concept computation algorithm enables large-scale archive analysis. We also introduce a model of cognition level and explain how it can be employed to predict changes in the category system of DMOZ.
[65]
T. Samar, “Scalable Distributed Time-Travel Text Search,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@mastersthesis{Samar2011, TITLE = {Scalable Distributed Time-Travel Text Search}, AUTHOR = {Samar, Thaer}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-A12913F267F0DA6FC12579650034300D-Samar2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Samar, Thaer %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Scalable Distributed Time-Travel Text Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-14B7-C %F EDOC: 618990 %F OTHER: Local-ID: C1256DBF005F876D-A12913F267F0DA6FC12579650034300D-Samar2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master
[66]
M. Simonovsky, “Hand Shape Recognition Using a ToF Camera : An Application to Sign Language,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
This master's thesis investigates the benefit of utilizing depth information acquired by a time-of-flight (ToF) camera for hand shape recognition from unrestricted viewpoints. Specifically, we assess the hypothesis that classical 3D content descriptors might be inappropriate for ToF depth images due to the 2.5D nature and noisiness of the data and possibly expensive computations in 3D space. Instead, we extend 2D descriptors to make use of the additional semantics of depth images. Our system is based on the appearance-based retrieval paradigm, using a synthetic 3D hand model to generate its database. The system is able to run at interactive frame rates. For increased robustness, no color, intensity, or time coherence information is used. A novel, domain-specific algorithm for segmenting the forearm from the upper body based on reprojecting the acquired geometry into the lateral view is introduced. Moreover, three kinds of descriptors exploiting depth data are proposed and the design choices made are experimentally supported. The whole system is then evaluated on an American sign language fingerspelling dataset. However, the retrieval performance still leaves room for improvement. Several insights and possible reasons are discussed.
Export
BibTeX
@mastersthesis{Simonovsky2010, TITLE = {Hand Shape Recognition Using a {ToF} Camera : An Application to Sign Language}, AUTHOR = {Simonovsky, Martin}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-F6F3ECA002CC444FC1257970006B4EF3-Simonovsky2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {This master's thesis investigates the benefit of utilizing depth information acquired by a time-of-flight (ToF) camera for hand shape recognition from unrestricted viewpoints. Specifically, we assess the hypothesis that classical 3D content descriptors might be inappropriate for ToF depth images due to the 2.5D nature and noisiness of the data and possibly expensive computations in 3D space. Instead, we extend 2D descriptors to make use of the additional semantics of depth images. Our system is based on the appearance-based retrieval paradigm, using a synthetic 3D hand model to generate its database. The system is able to run at interactive frame rates. For increased robustness, no color, intensity, or time coherence information is used. A novel, domain-specific algorithm for segmenting the forearm from the upper body based on reprojecting the acquired geometry into the lateral view is introduced. Moreover, three kinds of descriptors exploiting depth data are proposed and the design choices made are experimentally supported. The whole system is then evaluated on an American sign language fingerspelling dataset. However, the retrieval performance still leaves room for improvement. Several insights and possible reasons are discussed.}, }
Endnote
%0 Thesis %A Simonovsky, Martin %Y Theobalt, Christian %A referee: M&#252;ller, Meinard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T Hand Shape Recognition Using a ToF Camera : An Application to Sign Language : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-11B9-2 %F EDOC: 618894 %F OTHER: Local-ID: C125675300671F7B-F6F3ECA002CC444FC1257970006B4EF3-Simonovsky2010 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P III, 64 p. %V master %9 master %X This master's thesis investigates the benefit of utilizing depth information acquired by a time-of-flight (ToF) camera for hand shape recognition from unrestricted viewpoints. Specifically, we assess the hypothesis that classical 3D content descriptors might be inappropriate for ToF depth images due to the 2.5D nature and noisiness of the data and possibly expensive computations in 3D space. Instead, we extend 2D descriptors to make use of the additional semantics of depth images. Our system is based on the appearance-based retrieval paradigm, using a synthetic 3D hand model to generate its database. The system is able to run at interactive frame rates. For increased robustness, no color, intensity, or time coherence information is used. A novel, domain-specific algorithm for segmenting the forearm from the upper body based on reprojecting the acquired geometry into the lateral view is introduced. Moreover, three kinds of descriptors exploiting depth data are proposed and the design choices made are experimentally supported. The whole system is then evaluated on an American sign language fingerspelling dataset. However, the retrieval performance still leaves room for improvement. Several insights and possible reasons are discussed.
[67]
N. Tandon, “Deriving a Web-scale Common Sense Fact Knowledge Base,” Universität des Saarlandes, Saarbrücken, 2011.
Abstract
The fact that birds have feathers and ice is cold seems trivially true. Yet, most machine-readable sources of knowledge either lack such common sense facts entirely or have only limited coverage. Prior work on automated knowledge base construction has largely focused on relations between named entities and on taxonomic knowledge, while disregarding common sense properties. Extracting such structured data from text is challenging, especially due to the scarcity of explicitly expressed knowledge. Even when relying on large document collections, pattern-based information extraction approaches typically discover insufficient amounts of information. This thesis investigates harvesting massive amounts of common sense knowledge using the textual knowledge of the entire Web, while staying away from the massive engineering effort of procuring such a large corpus as the Web. Despite the advancements in knowledge harvesting, we observed that state-of-the-art methods were limited in terms of accuracy and discovered insufficient amounts of information in our desired setting. This thesis shows how to gather large amounts of common sense facts from Web n-gram data, using seeds from existing knowledge bases like ConceptNet. Our novel contributions include scalable methods for tapping into Web-scale data and a new scoring model to determine which patterns and facts are most reliable. An extensive experimental evaluation is provided for three different binary relations, comparing different sources of n-gram data as well as different algorithms. The experimental results show that this approach extends ConceptNet by many orders of magnitude (more than 200-fold) at comparable levels of precision.
Export
BibTeX
@mastersthesis{MasterTandon2011, TITLE = {Deriving a Web-scale Common Sense Fact Knowledge Base}, AUTHOR = {Tandon, Niket}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, ABSTRACT = {The fact that birds have feathers and ice is cold seems trivially true. Yet, most machine-readable sources of knowledge either lack such common sense facts entirely or have only limited coverage. Prior work on automated knowledge base construction has largely focused on relations between named entities and on taxonomic knowledge, while disregarding common sense properties. Extracting such structured data from text is challenging, especially due to the scarcity of explicitly expressed knowledge. Even when relying on large document collections, pattern-based information extraction approaches typically discover insufficient amounts of information. This thesis investigates harvesting massive amounts of common sense knowledge using the textual knowledge of the entire Web, while staying away from the massive engineering effort of procuring such a large corpus as the Web. Despite the advancements in knowledge harvesting, we observed that state-of-the-art methods were limited in terms of accuracy and discovered insufficient amounts of information in our desired setting. This thesis shows how to gather large amounts of common sense facts from Web n-gram data, using seeds from existing knowledge bases like ConceptNet. Our novel contributions include scalable methods for tapping into Web-scale data and a new scoring model to determine which patterns and facts are most reliable. An extensive experimental evaluation is provided for three different binary relations, comparing different sources of n-gram data as well as different algorithms. The experimental results show that this approach extends ConceptNet by many orders of magnitude (more than 200-fold) at comparable levels of precision.}, }
Endnote
%0 Thesis %A Tandon, Niket %Y Weikum, Gerhard %A referee: Theobalt, Christian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Deriving a Web-scale Common Sense Fact Knowledge Base : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-ABF9-8 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P X, 81 p. %V master %9 master %X The fact that birds have feathers and ice is cold seems trivially true. Yet, most machine-readable sources of knowledge either lack such common sense facts entirely or have only limited coverage. Prior work on automated knowledge base construction has largely focused on relations between named entities and on taxonomic knowledge, while disregarding common sense properties. Extracting such structured data from text is challenging, especially due to the scarcity of explicitly expressed knowledge. Even when relying on large document collections, pattern-based information extraction approaches typically discover insufficient amounts of information. This thesis investigates harvesting massive amounts of common sense knowledge using the textual knowledge of the entire Web, while staying away from the massive engineering effort of procuring such a large corpus as the Web. Despite the advancements in knowledge harvesting, we observed that state-of-the-art methods were limited in terms of accuracy and discovered insufficient amounts of information in our desired setting. This thesis shows how to gather large amounts of common sense facts from Web n-gram data, using seeds from existing knowledge bases like ConceptNet. Our novel contributions include scalable methods for tapping into Web-scale data and a new scoring model to determine which patterns and facts are most reliable. 
An extensive experimental evaluation is provided for three different binary relations, comparing different sources of n-gram data as well as different algorithms. The experimental results show that this approach extends ConceptNet by many orders of magnitude (more than 200-fold) at comparable levels of precision.
[68]
C. Teflioudi, “Learning Soft Inference Rules in Large and Uncertain Knowledge Bases,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@mastersthesis{Teflioudi2011, TITLE = {Learning Soft Inference Rules in Large and Uncertain Knowledge Bases}, AUTHOR = {Teflioudi, Christina}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-8C6053D027747A15C1257850004BFB15-Teflioudi2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Teflioudi, Christina %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Learning Soft Inference Rules in Large and Uncertain Knowledge Bases : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-1486-B %F EDOC: 618950 %F OTHER: Local-ID: C1256DBF005F876D-8C6053D027747A15C1257850004BFB15-Teflioudi2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master
[69]
A. T. Tran, “Context-aware timeline for entity exploration,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@mastersthesis{Tran2011, TITLE = {Context-aware timeline for entity exploration}, AUTHOR = {Tran, Anh Tuan}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-3DE40C1C4F51E0FBC1257853004E31B4-Tran2011}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Tran, Anh Tuan %Y Preda, Nicoleta %A referee: Elbassuoni, Shady %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Context-aware timeline for entity exploration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-1433-5 %F EDOC: 618952 %F OTHER: Local-ID: C1256DBF005F876D-3DE40C1C4F51E0FBC1257853004E31B4-Tran2011 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %V master %9 master
[70]
M. R. Yousefi, “Generating Detailed Face Models by Controlled Lighting,” Universität des Saarlandes, Saarbrücken, 2011.
Export
BibTeX
@mastersthesis{MasterThesisYousefiMohammad, TITLE = {Generating Detailed Face Models by Controlled Lighting}, AUTHOR = {Yousefi, Mohammad Reza}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2011}, DATE = {2011}, }
Endnote
%0 Thesis %A Yousefi, Mohammad Reza %A referee: Seidel, Hans-Peter %Y Thorm&#228;hlen, Thorsten %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Generating Detailed Face Models by Controlled Lighting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0010-13C1-B %F EDOC: 618921 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2011 %P XVI, 39 p. %V master %9 master
2010
[71]
A. Andreychenko, “Uniformization for Time-Inhomogeneous Markov Population Models,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
Time is one of the main factors in any kind of real-life system. When a certain system is analysed, one is often interested in its evolution with respect to time. Various phenomena can be described using a form of time-dependency. The varying load in call centres is an example of time-dependency in queueing systems. The migration of biological species in autumn and spring is another illustration of behaviour changing over time. The ageing process in critical infrastructures (which can result in the failure of system components) can also be considered another type of time-dependent evolution. Considering the variability in time of chemical and biological systems, one comes to the general tasks of systems biology [9]. It is an inter-disciplinary field of study which investigates complex interactions between the components of biological systems and aims to explore their fundamental laws and new features. Systems biology is also used to refer to a certain type of research cycle. It starts with the creation of a model. One tries to describe the behaviour in the most intuitive and informative way, which makes the subsequent analysis convenient and transparent. The traditional approach is based on deterministic models where the evolution can be predicted with certainty. This type of model usually operates at a macroscopic scale; if one considers chemical reactions, the state of the system is represented by the concentrations of species and a continuous deterministic change is assumed. A set of ordinary differential equations (ODEs) is one way to describe such models. To obtain a solution, numerical methods are applied. The choice of a certain ODE solver depends on the type of the ODE system. Another option is a full description of the chemical reaction system where we model each single molecule explicitly, operating with their properties and positions in space. 
Naturally it is difficult to treat big systems in such a way, and it also creates restrictions for computational analysis. However, it turns out that the deterministic formalism is not always sufficient to describe all possible ways for the system to evolve. For instance, the lambda phage decision circuit [1] is a motivational example of such a system. When the lambda phage virus infects the E. coli bacterium, it can evolve in two different ways. The first one is lysogeny, where the genome of the virus is integrated into the genome of the bacterium. The virus DNA is then replicated in descendant cells using the replication mechanism of the host cell. The other way is entering the lytic cycle, which means that new phages are synthesized directly in the host cell until finally its membrane is destroyed and the new phages are released. A deterministic model is not appropriate to describe this process of choosing between two pathways, as this decision is probabilistic and one needs a stochastic model to give an appropriate description. Another important issue which has to be addressed is the fact that the state of the system changes discretely. This means that one considers not the continuous change of chemical species concentrations but discrete events occurring with different probabilities (which can be time-dependent as well). We will use the continuous-time Markov Population Models (MPMs) formalism in this thesis to describe discrete-state stochastic systems. They are indeed continuous-time Markov processes, where the state of the system represents populations and is expressed by a vector of natural numbers. Such systems can have infinitely many states. For the case of a chemical reaction network, this results in the fact that one cannot provide strict upper bounds for the populations of certain species. When analysing these systems, one can estimate measures of interest (like the expectation and variance of certain species populations at a given time instant). 
Besides this, probabilities for certain events to occur can be important (for instance, the probability for a population to reach a threshold or the probability for given species to go extinct). The usual way to investigate properties of these systems is simulation [8], which means that a large number of possible sample trajectories are generated and then analysed. However, it can be difficult to collect a sufficient number of trajectories to provide statistical estimations of good quality. Besides simulation, approaches based on the uniformization technique have proven to be computationally efficient for the analysis of time-independent MPMs. In the case of time-dependent processes only a few results concerning the performance of numerical techniques are known [2]. Here we present a method for conducting an analysis of MPMs that can have possibly infinitely many states and whose dynamics are time-dependent. To cope with the problem we combine the ideas of on-the-fly uniformization [5] with the method for treating time-inhomogeneous behaviour presented by Buchholz.
Export
BibTeX
@mastersthesis{Andreychenko2010, TITLE = {Uniformization for Time-Inhomogeneous {M}arkov Population Models}, AUTHOR = {Andreychenko, Aleksandr}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Time is one of the main factors in any kind of real-life system. When a certain system is analysed, one is often interested in its evolution with respect to time. Various phenomena can be described using a form of time-dependency. The difference in load in call centres is an example of time-dependency in queueing systems. The migration of biological species in autumn and spring is another illustration of behaviour changing in time. The ageing process in critical infrastructures (which can result in the failure of system components) can also be considered another type of time-dependent evolution. Considering variability in time for chemical and biological systems, one arrives at the general tasks of systems biology [9]. This is an inter-disciplinary field which investigates complex interactions between the components of biological systems and aims to explore their fundamental laws and new features. Systems biology also refers to a certain type of research cycle, which starts with the creation of a model. One tries to describe the behaviour in the most intuitive and informative way, which ensures convenience and clarity of the subsequent analysis. The traditional approach is based on deterministic models, where the evolution can be predicted with certainty. This type of model usually operates at a macroscopic scale; if one considers chemical reactions, the state of the system is represented by the concentrations of species and a continuous deterministic change is assumed. A set of ordinary differential equations (ODEs) is one way to describe such models. To obtain a solution, numerical methods are applied. The choice of a certain ODE solver depends on the type of the ODE system. Another option is a full description of the chemical reaction system, where we model each single molecule explicitly, operating with their properties and positions in space. Naturally, it is difficult to treat big systems in such a way, and it also creates restrictions for computational analysis. However, it reveals that the deterministic formalism is not always sufficient to describe all possible ways for the system to evolve. For instance, the lambda phage decision circuit [1] is a motivating example of such a system. When the lambda phage virus infects the E. coli bacterium, it can evolve in two different ways. The first is lysogeny, where the genome of the virus is integrated into the genome of the bacterium. The virus DNA is then replicated in descendant cells using the replication mechanism of the host cell. The other way is entering the lytic cycle, which means that new phages are synthesized directly in the host cell until finally its membrane is destroyed and the new phages are released. A deterministic model is not appropriate to describe this choice between the two pathways, as the decision is probabilistic, and one needs a stochastic model to give an appropriate description. Another important issue which has to be addressed is the fact that the state of the system changes discretely: one considers not the continuous change of chemical species concentrations but discrete events occurring with different probabilities (which can be time-dependent as well). In this thesis we use the formalism of continuous-time Markov population models (MPMs) to describe discrete-state stochastic systems. They are continuous-time Markov processes where the state of the system represents populations and is expressed by a vector of natural numbers. Such systems can have infinitely many states. In the case of chemical reaction networks this means that one cannot provide strict upper bounds for the populations of certain species. When analysing these systems, one can estimate measures of interest (such as the expectation and variance of certain species populations at a given time instant). Besides this, probabilities of certain events can be important (for instance, the probability that a population reaches a threshold, or the probability that a given species becomes extinct). The usual way to investigate properties of these systems is simulation [8], in which a large number of possible sample trajectories is generated and then analysed. However, it can be difficult to collect a sufficient number of trajectories to provide statistical estimates of good quality. Besides simulation, approaches based on the uniformization technique have proven to be computationally efficient for the analysis of time-independent MPMs. In the case of time-dependent processes, only a few results concerning the performance of numerical techniques are known [2]. Here we present a method for the analysis of MPMs that may have infinitely many states and whose dynamics are time-dependent. To cope with the problem, we combine the ideas of on-the-fly uniformization [5] with the method for treating time-inhomogeneous behaviour presented by Buchholz.}, }
Endnote
%0 Thesis %A Andreychenko, Aleksandr %Y Hermanns, Holger %A referee: Wolf, Verena %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Uniformization for Time-Inhomogeneous Markov Population Models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-AF8D-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V master %9 master %X Time is one of the main factors in any kind of real-life system. When a certain system is analysed, one is often interested in its evolution with respect to time. Various phenomena can be described using a form of time-dependency. The difference in load in call centres is an example of time-dependency in queueing systems. The migration of biological species in autumn and spring is another illustration of behaviour changing in time. The ageing process in critical infrastructures (which can result in the failure of system components) can also be considered another type of time-dependent evolution. Considering variability in time for chemical and biological systems, one arrives at the general tasks of systems biology [9]. This is an inter-disciplinary field which investigates complex interactions between the components of biological systems and aims to explore their fundamental laws and new features. Systems biology also refers to a certain type of research cycle, which starts with the creation of a model. One tries to describe the behaviour in the most intuitive and informative way, which ensures convenience and clarity of the subsequent analysis. The traditional approach is based on deterministic models, where the evolution can be predicted with certainty. This type of model usually operates at a macroscopic scale; if one considers chemical reactions, the state of the system is represented by the concentrations of species and a continuous deterministic change is assumed. A set of ordinary differential equations (ODEs) is one way to describe such models. To obtain a solution, numerical methods are applied. The choice of a certain ODE solver depends on the type of the ODE system. Another option is a full description of the chemical reaction system, where we model each single molecule explicitly, operating with their properties and positions in space. Naturally, it is difficult to treat big systems in such a way, and it also creates restrictions for computational analysis. However, it reveals that the deterministic formalism is not always sufficient to describe all possible ways for the system to evolve. For instance, the lambda phage decision circuit [1] is a motivating example of such a system. When the lambda phage virus infects the E. coli bacterium, it can evolve in two different ways. The first is lysogeny, where the genome of the virus is integrated into the genome of the bacterium. The virus DNA is then replicated in descendant cells using the replication mechanism of the host cell. The other way is entering the lytic cycle, which means that new phages are synthesized directly in the host cell until finally its membrane is destroyed and the new phages are released. A deterministic model is not appropriate to describe this choice between the two pathways, as the decision is probabilistic, and one needs a stochastic model to give an appropriate description. Another important issue which has to be addressed is the fact that the state of the system changes discretely: one considers not the continuous change of chemical species concentrations but discrete events occurring with different probabilities (which can be time-dependent as well). In this thesis we use the formalism of continuous-time Markov population models (MPMs) to describe discrete-state stochastic systems. They are continuous-time Markov processes where the state of the system represents populations and is expressed by a vector of natural numbers. Such systems can have infinitely many states. In the case of chemical reaction networks this means that one cannot provide strict upper bounds for the populations of certain species. When analysing these systems, one can estimate measures of interest (such as the expectation and variance of certain species populations at a given time instant). Besides this, probabilities of certain events can be important (for instance, the probability that a population reaches a threshold, or the probability that a given species becomes extinct). The usual way to investigate properties of these systems is simulation [8], in which a large number of possible sample trajectories is generated and then analysed. However, it can be difficult to collect a sufficient number of trajectories to provide statistical estimates of good quality. Besides simulation, approaches based on the uniformization technique have proven to be computationally efficient for the analysis of time-independent MPMs. In the case of time-dependent processes, only a few results concerning the performance of numerical techniques are known [2]. Here we present a method for the analysis of MPMs that may have infinitely many states and whose dynamics are time-dependent. To cope with the problem, we combine the ideas of on-the-fly uniformization [5] with the method for treating time-inhomogeneous behaviour presented by Buchholz.
[72]
S. Byelozyorov, “Construction of Virtual Worlds with Web 2.0 Technology,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
Current Web technologies allow developers to create rich Web applications. Unlike desktop applications, Web 2.0 programs are created by easily linking several existing components. This approach, also known as mashup, allows developers to use JavaScript to connect web services and browser components. I have extended this development method by bringing 3D and virtual-world networking components into the browser. This allowed me to create a virtual-world Web application similar to Second Life. I have wrapped the open-source Sirikata platform for virtual worlds into a Web-service component, created an XML3D rendering component, and combined them with other browser services, thus creating a fully featured 3D world application right inside the browser.
Export
BibTeX
@mastersthesis{Byelozyorov2010, TITLE = {Construction of Virtual Worlds with Web 2.0 Technology}, AUTHOR = {Byelozyorov, Sergey}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {Current Web technologies allow developers to create rich Web applications. Unlike desktop applications, Web 2.0 programs are created by easily linking several existing components. This approach, also known as mashup, allows developers to use JavaScript to connect web services and browser components. I have extended this development method by bringing 3D and virtual-world networking components into the browser. This allowed me to create a virtual-world Web application similar to Second Life. I have wrapped the open-source Sirikata platform for virtual worlds into a Web-service component, created an XML3D rendering component, and combined them with other browser services, thus creating a fully featured 3D world application right inside the browser.}, }
Endnote
%0 Thesis %A Byelozyorov, Sergey %Y Rubinstein, Dimitri %A referee: Zeller, Andreas %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Construction of Virtual Worlds with Web 2.0 Technology : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B0F1-3 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V master %9 master %X Current Web technologies allow developers to create rich Web applications. Unlike desktop applications, Web 2.0 programs are created by easily linking several existing components. This approach, also known as mashup, allows developers to use JavaScript to connect web services and browser components. I have extended this development method by bringing 3D and virtual-world networking components into the browser. This allowed me to create a virtual-world Web application similar to Second Life. I have wrapped the open-source Sirikata platform for virtual worlds into a Web-service component, created an XML3D rendering component, and combined them with other browser services, thus creating a fully featured 3D world application right inside the browser.
[73]
L. de la Garza, “Implementation and evaluation of an efficient, distributed replication algorithm in a real network,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@mastersthesis{delaGarza2010, TITLE = {Implementation and evaluation of an efficient, distributed replication algorithm in a real network}, AUTHOR = {de la Garza, Luis}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-D30860E168D99863C125778500372C38-delaGarza2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A de la Garza, Luis %Y Weikum, Gerhard %A referee: Sozio, Mauro %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Implementation and evaluation of an efficient, distributed replication algorithm in a real network : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1473-9 %F EDOC: 536379 %F OTHER: Local-ID: C1256DBF005F876D-D30860E168D99863C125778500372C38-delaGarza2010 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V master %9 master
[74]
S. Jansen, “Symmetry Detection in Images Using Belief Propagation,” Universität des Saarlandes, Saarbrücken, 2010.
Abstract
In this thesis a general approach for the detection of symmetric structures in images is presented. Rather than relying on feature points to extract symmetries, symmetries are described using a probabilistic formulation of image self-similarity. Using a Markov random field we obtain a joint probability distribution describing all assignments of the image to itself. Due to the high dimensionality of this joint distribution, we do not examine it directly, but approximate its marginals in order to gather information about the symmetries within the image. In the case of perfect symmetries this approximation is done using belief propagation. A novel variant of belief propagation is introduced, allowing for reliable approximations when dealing with approximate symmetries. We apply our approach to several images ranging from perfect synthetic symmetries to real-world scenarios, demonstrating the capabilities of probabilistic frameworks for symmetry detection.
Export
BibTeX
@mastersthesis{Jansen2010, TITLE = {Symmetry Detection in Images Using Belief Propagation}, AUTHOR = {Jansen, Silke}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-705AE7BF0843CDFDC1257823004C9D42-Jansen2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, ABSTRACT = {In this thesis a general approach for the detection of symmetric structures in images is presented. Rather than relying on feature points to extract symmetries, symmetries are described using a probabilistic formulation of image self-similarity. Using a Markov random field we obtain a joint probability distribution describing all assignments of the image to itself. Due to the high dimensionality of this joint distribution, we do not examine it directly, but approximate its marginals in order to gather information about the symmetries within the image. In the case of perfect symmetries this approximation is done using belief propagation. A novel variant of belief propagation is introduced, allowing for reliable approximations when dealing with approximate symmetries. We apply our approach to several images ranging from perfect synthetic symmetries to real-world scenarios, demonstrating the capabilities of probabilistic frameworks for symmetry detection.}, }
Endnote
%0 Thesis %A Jansen, Silke %Y Wand, Michael %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Symmetry Detection in Images Using Belief Propagation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-145F-8 %F EDOC: 537325 %F OTHER: Local-ID: C125675300671F7B-705AE7BF0843CDFDC1257823004C9D42-Jansen2010 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %P X, 51 p. %V master %9 master %X In this thesis a general approach for the detection of symmetric structures in images is presented. Rather than relying on feature points to extract symmetries, symmetries are described using a probabilistic formulation of image self-similarity. Using a Markov random field we obtain a joint probability distribution describing all assignments of the image to itself. Due to the high dimensionality of this joint distribution, we do not examine it directly, but approximate its marginals in order to gather information about the symmetries within the image. In the case of perfect symmetries this approximation is done using belief propagation. A novel variant of belief propagation is introduced, allowing for reliable approximations when dealing with approximate symmetries. We apply our approach to several images ranging from perfect synthetic symmetries to real-world scenarios, demonstrating the capabilities of probabilistic frameworks for symmetry detection.
[75]
L. E. Kuhn Cuellar, “A probabilistic algorithm for matching protein structures - and its application to detecting functionally relevant patterns,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@mastersthesis{Kuhn2010a, TITLE = {A probabilistic algorithm for matching protein structures -- and its application to detecting functionally relevant patterns}, AUTHOR = {Kuhn Cuellar, Luis Eugenio}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125673F004B2D7B-6C92695A7AE2ABEEC1257834003E67F4-Kuhn2010a}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Kuhn Cuellar, Luis Eugenio %Y Sommer, Ingolf %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T A probabilistic algorithm for matching protein structures - and its application to detecting functionally relevant patterns : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-147B-A %F EDOC: 536615 %F OTHER: Local-ID: C125673F004B2D7B-6C92695A7AE2ABEEC1257834003E67F4-Kuhn2010a %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V master %9 master
[76]
F. Makari Manshadi, “Fast Distributed Replication in Modern Networks,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@mastersthesis{Makari-Manshadi2010, TITLE = {Fast Distributed Replication in Modern Networks}, AUTHOR = {Makari Manshadi, Faraz}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-540B1228EC63CB65C1257715003E7B21-Makari-Manshadi2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Makari Manshadi, Faraz %Y Weikum, Gerhard %A referee: Sozio, Mauro %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Fast Distributed Replication in Modern Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1470-F %F EDOC: 536367 %F OTHER: Local-ID: C1256DBF005F876D-540B1228EC63CB65C1257715003E7B21-Makari-Manshadi2010 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V master %9 master
[77]
T. Meiser, “Visualization Techniques for Rule-based Reasoning in Uncertain Knowledge Bases,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@mastersthesis{Meiser2010, TITLE = {Visualization Techniques for Rule-based Reasoning in Uncertain Knowledge Bases}, AUTHOR = {Meiser, Timm}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-79A43ACF8CBC126DC125781500517417-Meiser2010}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Meiser, Timm %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society %T Visualization Techniques for Rule-based Reasoning in Uncertain Knowledge Bases : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-144E-E %F EDOC: 536392 %F OTHER: Local-ID: C1256DBF005F876D-79A43ACF8CBC126DC125781500517417-Meiser2010 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V master %9 master
[78]
V. Setty, “Efficiently Identifying Interesting Time-Points in Text Archive Search,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@mastersthesis{Setty2010Master, TITLE = {Efficiently Identifying Interesting Time-Points in Text Archive Search}, AUTHOR = {Setty, Vinay}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-3DD9962C88FAA1C9C12576C40037DE66-Setty2010Master}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Setty, Vinay %Y Weikum, Gerhard %A referee: Bedathur, Srikanta %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Efficiently Identifying Interesting Time-Points in Text Archive Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-147E-4 %F EDOC: 536359 %F OTHER: Local-ID: C1256DBF005F876D-3DD9962C88FAA1C9C12576C40037DE66-Setty2010Master %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V master %9 master
[79]
M. Yahya, “Accelerating Rule-Based Reasoning in Disk-Resident RDF Knowledge Bases,” Universität des Saarlandes, Saarbrücken, 2010.
Export
BibTeX
@mastersthesis{Yahya10, TITLE = {Accelerating Rule-Based Reasoning in Disk-Resident {RDF} Knowledge Bases}, AUTHOR = {Yahya, Mohamed}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-5E0515E262BC394BC12577190038C333-Yahya10}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2010}, DATE = {2010}, }
Endnote
%0 Thesis %A Yahya, Mohamed %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society %T Accelerating Rule-Based Reasoning in Disk-Resident RDF Knowledge Bases : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1451-4 %F EDOC: 536368 %F OTHER: Local-ID: C1256DBF005F876D-5E0515E262BC394BC12577190038C333-Yahya10 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2010 %V master %9 master
2009
[80]
A. Anand, “Index Partitioning Strategies for Peer-to-Peer Web Archival,” Universität des Saarlandes, Saarbrücken, 2009.
Export
BibTeX
@mastersthesis{anand09, TITLE = {Index Partitioning Strategies for Peer-to-Peer Web Archival}, AUTHOR = {Anand, Avishek}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-67C68AC79E28850EC12575BB003D3708-anand09}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, }
Endnote
%0 Thesis %A Anand, Avishek %Y Weikum, Gerhard %A referee: Bedathur, Srikanta %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Index Partitioning Strategies for Peer-to-Peer Web Archival : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17EE-E %F EDOC: 520420 %F OTHER: Local-ID: C1256DBF005F876D-67C68AC79E28850EC12575BB003D3708-anand09 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V master %9 master
[81]
P. Danilewski, “Binned kd-tree Construction with SAH on the GPU,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
Our main goal is to create realistic-looking animation in real time. To that end, we are interested in fast ray tracing. Ray tracing recursively traces photon movement from the camera (backward) or from light sources (forward). To find the first intersection between a given ray and the objects in the scene, we use acceleration structures, for example kd-trees. Kd-trees are considered to perform best in the majority of cases; however, due to their long construction times they are often avoided for dynamic scenes. In this work we try to overcome this obstacle by building the kd-tree in parallel on the many cores of a GPU. Our algorithm builds the kd-tree in a top-down, breadth-first fashion, with many threads processing each node of the tree. For each node we test 31 uniformly distributed candidate split planes along each axis and use the Surface Area Heuristic (SAH) cost function to estimate the best one. In order to reach maximum performance, the kd-tree construction is divided into 4 stages. Each of them handles tree nodes of different primitive counts and differs in how counting is resolved and how work is distributed on the GPU. Our current program constructs kd-trees faster than other GPU implementations, while maintaining competitive quality compared to serial CPU programs. Tests have shown that execution time scales well with respect to the power of the GPU, and it will most likely continue doing so with future releases of the hardware.
Export
BibTeX
@mastersthesis{Danilewski2009, TITLE = {Binned kd-tree Construction with {SAH} on the {GPU}}, AUTHOR = {Danilewski, Piotr}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Our main goal is to create realistic-looking animation in real time. To that end, we are interested in fast ray tracing. Ray tracing recursively traces photon movement from the camera (backward) or from light sources (forward). To find the first intersection between a given ray and the objects in the scene, we use acceleration structures, for example kd-trees. Kd-trees are considered to perform best in the majority of cases; however, due to their long construction times they are often avoided for dynamic scenes. In this work we try to overcome this obstacle by building the kd-tree in parallel on the many cores of a GPU. Our algorithm builds the kd-tree in a top-down, breadth-first fashion, with many threads processing each node of the tree. For each node we test 31 uniformly distributed candidate split planes along each axis and use the Surface Area Heuristic (SAH) cost function to estimate the best one. In order to reach maximum performance, the kd-tree construction is divided into 4 stages. Each of them handles tree nodes of different primitive counts and differs in how counting is resolved and how work is distributed on the GPU. Our current program constructs kd-trees faster than other GPU implementations, while maintaining competitive quality compared to serial CPU programs. Tests have shown that execution time scales well with respect to the power of the GPU, and it will most likely continue doing so with future releases of the hardware.}, }
Endnote
%0 Thesis %A Danilewski, Piotr %Y Slusallek, Philipp %A referee: Myszkowski, Karol %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Binned kd-tree Construction with SAH on the GPU : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B6D9-D %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V master %9 master %X Our main goal is to create realistic-looking animation in real time. To that end, we are interested in fast ray tracing. Ray tracing recursively traces photon movement from the camera (backward) or from light sources (forward). To find the first intersection between a given ray and the objects in the scene, we use acceleration structures, for example kd-trees. Kd-trees are considered to perform best in the majority of cases; however, due to their long construction times they are often avoided for dynamic scenes. In this work we try to overcome this obstacle by building the kd-tree in parallel on the many cores of a GPU. Our algorithm builds the kd-tree in a top-down, breadth-first fashion, with many threads processing each node of the tree. For each node we test 31 uniformly distributed candidate split planes along each axis and use the Surface Area Heuristic (SAH) cost function to estimate the best one. In order to reach maximum performance, the kd-tree construction is divided into 4 stages. Each of them handles tree nodes of different primitive counts and differs in how counting is resolved and how work is distributed on the GPU. Our current program constructs kd-trees faster than other GPU implementations, while maintaining competitive quality compared to serial CPU programs. Tests have shown that execution time scales well with respect to the power of the GPU, and it will most likely continue doing so with future releases of the hardware.
[82]
O. Honcharova, “Static Detection of Parametric Loop Bounds on C Code,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
Static Worst-Case Execution Time (WCET) analysis is a technique to derive upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component for the static derivation of precise WCET estimates is the determination of upper bounds on the number of times loops can be iterated. The idea of parametric loop bound analysis is to express the upper loop bound as a formula depending on parameters: variables and expressions that stay constant within the loop body. The formula is constructed once for each loop. Then, by instantiating this formula with values of parameters acquired externally (from value analysis, etc.), a concrete loop bound can be computed without high computational effort.
Export
BibTeX
@mastersthesis{Honcharova2009, TITLE = {Static Detection of Parametric Loop Bounds on {C} Code}, AUTHOR = {Honcharova, Olha}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Static Worst-Case Execution Time (WCET) analysis is a technique to derive upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component for static derivation of precise WCET estimates is determination of upper bounds on the number of times loops can be iterated. The idea of the parametric loop bound analysis is to express the upper loop bound as a formula depending on parameters -- variables and expressions staying constant within the loop body. The formula is constructed once for each loop. Then by instantiating this formula with values of parameters acquired externally (from value analysis, etc.), a concrete loop bound can be computed without high computational effort.}, }
Endnote
%0 Thesis %A Honcharova, Olha %Y Finkbeiner, Bernd %A referee: Martin, Florian %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Static Detection of Parametric Loop Bounds on C Code : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BA7A-7 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V master %9 master %X Static Worst-Case Execution Time (WCET) analysis is a technique to derive upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component for static derivation of precise WCET estimates is determination of upper bounds on the number of times loops can be iterated. The idea of the parametric loop bound analysis is to express the upper loop bound as a formula depending on parameters – variables and expressions staying constant within the loop body. The formula is constructed once for each loop. Then by instantiating this formula with values of parameters acquired externally (from value analysis, etc.), a concrete loop bound can be computed without high computational effort.
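The parametric-bound idea in the abstract above, building a bound formula once per loop and instantiating it cheaply with externally acquired parameter values, can be illustrated for a simple counted loop. The representation below is a hypothetical simplification, not the thesis tool.

```python
# Sketch of a parametric loop bound: for a counted loop
# "for (i = lo; i < hi; i += step)" the iteration count is a closed-form
# formula over the parameters lo, hi, step. The formula is constructed once;
# each instantiation (e.g. with results from a value analysis) is cheap.
# This simplified representation is illustrative, not the thesis tool.

import math

def counted_loop_bound():
    """Return the bound formula for "for (i = lo; i < hi; i += step)"."""
    def bound(lo, hi, step):
        if step <= 0:
            raise ValueError("non-positive step: loop may not terminate")
        return max(0, math.ceil((hi - lo) / step))
    return bound

formula = counted_loop_bound()   # constructed once per loop
print(formula(0, 10, 3))         # instantiated with concrete parameter values
print(formula(5, 5, 1))          # empty loop
```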
[83]
A. Jindal, “Quality in Phrase Mining,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
Phrase snippets of large text corpora like news articles or web search results offer great insight and analytical value. While much of the prior work is focussed on efficient storage and retrieval of all candidate phrases, little emphasis has been laid on the quality of the result set. In this thesis, we define phrases of interest and propose a framework for mining and post-processing interesting phrases. We focus on the quality of phrases and develop techniques to mine minimal-length maximal-informative sequences of words. The techniques developed are streamed into a post-processing pipeline and include exact and approximate match-based merging, incomplete phrase detection with filtering, and heuristics-based phrase classification. The strategies aim to prune the candidate set of phrases down to the ones being meaningful and having rich content. We characterize the phrases with heuristics- and NLP-based features. We use a supervised learning based regression model to predict their interestingness. Further, we develop and analyze ranking and grouping models for presenting the phrases to the user. Finally, we discuss relevance and performance evaluation of our techniques. Our framework is evaluated using a recently released real-world corpus of New York Times news articles.
Export
BibTeX
@mastersthesis{Jindal2010, TITLE = {Quality in Phrase Mining}, AUTHOR = {Jindal, Alekh}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Phrase snippets of large text corpora like news articles or web search results offer great insight and analytical value. While much of the prior work is focussed on efficient storage and retrieval of all candidate phrases, little emphasis has been laid on the quality of the result set. In this thesis, we define phrases of interest and propose a framework for mining and post-processing interesting phrases. We focus on the quality of phrases and develop techniques to mine minimal-length maximal-informative sequences of words. The techniques developed are streamed into a post-processing pipeline and include exact and approximate match-based merging, incomplete phrase detection with filtering, and heuristics-based phrase classification. The strategies aim to prune the candidate set of phrases down to the ones being meaningful and having rich content. We characterize the phrases with heuristics- and NLP-based features. We use a supervised learning based regression model to predict their interestingness. Further, we develop and analyze ranking and grouping models for presenting the phrases to the user. Finally, we discuss relevance and performance evaluation of our techniques. Our framework is evaluated using a recently released real-world corpus of New York Times news articles.}, }
Endnote
%0 Thesis %A Jindal, Alekh %Y Weikum, Gerhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Quality in Phrase Mining : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BA7D-1 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V master %9 master %X Phrase snippets of large text corpora like news articles or web search results offer great insight and analytical value. While much of the prior work is focussed on efficient storage and retrieval of all candidate phrases, little emphasis has been laid on the quality of the result set. In this thesis, we define phrases of interest and propose a framework for mining and post-processing interesting phrases. We focus on the quality of phrases and develop techniques to mine minimal-length maximal-informative sequences of words. The techniques developed are streamed into a post-processing pipeline and include exact and approximate match-based merging, incomplete phrase detection with filtering, and heuristics-based phrase classification. The strategies aim to prune the candidate set of phrases down to the ones being meaningful and having rich content. We characterize the phrases with heuristics- and NLP-based features. We use a supervised learning based regression model to predict their interestingness. Further, we develop and analyze ranking and grouping models for presenting the phrases to the user. Finally, we discuss relevance and performance evaluation of our techniques. Our framework is evaluated using a recently released real-world corpus of New York Times news articles.
[84]
J. Kalojanov, “Parallel and Lazy Construction of Grids for Ray Tracing on Graphics Hardware,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
In this thesis we investigate the use of uniform grids as acceleration structures for ray tracing on data-parallel machines such as modern graphics processors. The main focus of this work is the trade-off between construction time and rendering performance provided by the acceleration structures, which is important for rendering dynamic scenes. We propose several parallel construction algorithms for uniform and two-level grids as well as a ray-triangle intersection algorithm, which improves SIMD utilization for incoherent rays. The result of this work is a GPU ray tracer with performance for dynamic scenes that is comparable to, and in some cases better than, the best known implementations today.
Export
BibTeX
@mastersthesis{Kalojanov2009, TITLE = {Parallel and Lazy Construction of Grids for Ray Tracing on Graphics Hardware}, AUTHOR = {Kalojanov, Javor}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {In this thesis we investigate the use of uniform grids as acceleration structures for ray tracing on data-parallel machines such as modern graphics processors. The main focus of this work is the trade-off between construction time and rendering performance provided by the acceleration structures, which is important for rendering dynamic scenes. We propose several parallel construction algorithms for uniform and two-level grids as well as a ray-triangle intersection algorithm, which improves SIMD utilization for incoherent rays. The result of this work is a GPU ray tracer with performance for dynamic scenes that is comparable to, and in some cases better than, the best known implementations today.}, }
Endnote
%0 Thesis %A Kalojanov, Javor %Y Slusallek, Philipp %A referee: Wand, Michael %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Parallel and Lazy Construction of Grids for Ray Tracing on Graphics Hardware : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BA80-8 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V master %9 master %X In this thesis we investigate the use of uniform grids as acceleration structures for ray tracing on data-parallel machines such as modern graphics processors. The main focus of this work is the trade-off between construction time and rendering performance provided by the acceleration structures, which is important for rendering dynamic scenes. We propose several parallel construction algorithms for uniform and two-level grids as well as a ray-triangle intersection algorithm, which improves SIMD utilization for incoherent rays. The result of this work is a GPU ray tracer with performance for dynamic scenes that is comparable to, and in some cases better than, the best known implementations today.
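The basic uniform-grid operation behind the acceleration structure in the entry above, mapping a point to the cell that contains it, can be sketched as follows. Grid bounds and resolution are example values; this is not the thesis's GPU code.

```python
# Sketch of the core uniform-grid lookup: clamp a 3D point into the grid
# bounds and compute its integer cell coordinates. Example values only;
# the thesis's parallel GPU construction algorithms are not reproduced here.

def cell_of(point, grid_min, grid_max, res):
    """Return the (ix, iy, iz) cell containing a point, clamped to the grid."""
    coords = []
    for p, lo, hi, n in zip(point, grid_min, grid_max, res):
        t = (p - lo) / (hi - lo)      # normalize coordinate into [0, 1]
        i = min(int(t * n), n - 1)    # clamp the upper boundary into the grid
        coords.append(max(i, 0))      # clamp below the lower boundary
    return tuple(coords)

# A 4x4x4 grid over the unit cube:
print(cell_of((0.5, 0.99, 1.0), (0, 0, 0), (1, 1, 1), (4, 4, 4)))  # (2, 3, 3)
```

During traversal a ray steps from cell to cell and tests only the triangles referenced by each visited cell.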
[85]
M. Khosla, “Message Passing Algorithms,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
Constraint Satisfaction Problems (CSPs) are defined over a set of variables whose state must satisfy a number of constraints. We study a class of algorithms called Message Passing Algorithms, which aim at finding the probability distribution of the variables over the space of satisfying assignments. These algorithms involve passing local messages (according to some message update rules) over the edges of a factor graph constructed corresponding to the CSP. We focus on the Belief Propagation (BP) algorithm, which finds exact solution marginals for tree-like factor graphs. However, convergence and exactness cannot be guaranteed for a general factor graph. We propose a method for improving BP to account for cycles in the factor graph. We also study another message passing algorithm known as Survey Propagation (SP), which is empirically quite effective in solving random K-SAT instances, even when the density is close to the satisfiability threshold. We contribute to the theoretical understanding of SP by deriving the SP equations from the BP message update rules.
Export
BibTeX
@mastersthesis{Khosla2009, TITLE = {Message Passing Algorithms}, AUTHOR = {Khosla, Megha}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Constraint Satisfaction Problems (CSPs) are defined over a set of variables whose state must satisfy a number of constraints. We study a class of algorithms called Message Passing Algorithms, which aim at finding the probability distribution of the variables over the space of satisfying assignments. These algorithms involve passing local messages (according to some message update rules) over the edges of a factor graph constructed corresponding to the CSP. We focus on the Belief Propagation (BP) algorithm, which finds exact solution marginals for tree-like factor graphs. However, convergence and exactness cannot be guaranteed for a general factor graph. We propose a method for improving BP to account for cycles in the factor graph. We also study another message passing algorithm known as Survey Propagation (SP), which is empirically quite effective in solving random K-SAT instances, even when the density is close to the satisfiability threshold. We contribute to the theoretical understanding of SP by deriving the SP equations from the BP message update rules.}, }
Endnote
%0 Thesis %A Khosla, Megha %Y Panagiotou, Konstantinos %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Message Passing Algorithms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BA83-2 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V master %9 master %X Constraint Satisfaction Problems (CSPs) are defined over a set of variables whose state must satisfy a number of constraints. We study a class of algorithms called Message Passing Algorithms, which aim at finding the probability distribution of the variables over the space of satisfying assignments. These algorithms involve passing local messages (according to some message update rules) over the edges of a factor graph constructed corresponding to the CSP. We focus on the Belief Propagation (BP) algorithm, which finds exact solution marginals for tree-like factor graphs. However, convergence and exactness cannot be guaranteed for a general factor graph. We propose a method for improving BP to account for cycles in the factor graph. We also study another message passing algorithm known as Survey Propagation (SP), which is empirically quite effective in solving random K-SAT instances, even when the density is close to the satisfiability threshold. We contribute to the theoretical understanding of SP by deriving the SP equations from the BP message update rules.
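The sum-product message passing described in the abstract above can be illustrated on a tiny tree-shaped factor graph, where Belief Propagation is exact. The factor tables below are made-up numbers for illustration, not from the thesis.

```python
# Minimal sketch of sum-product belief propagation on a two-variable,
# tree-shaped factor graph p(x0, x1) proportional to f0(x0)*g(x0, x1)*f1(x1).
# Factor values are invented for illustration. On a tree, BP's marginals
# are exact, which the brute-force check at the end confirms.

f0 = [0.6, 0.4]          # unary factor on x0
f1 = [0.3, 0.7]          # unary factor on x1
g = [[0.9, 0.1],         # pairwise factor g[x0][x1]
     [0.2, 0.8]]

# Message from factor g to variable x0: absorb f1, then sum out x1.
m_g_to_x0 = [sum(g[a][b] * f1[b] for b in range(2)) for a in range(2)]

# Belief at x0 is the product of its incoming messages, normalized.
belief = [f0[a] * m_g_to_x0[a] for a in range(2)]
z = sum(belief)
bp_marginal = [b / z for b in belief]

# Brute-force marginal over all joint states, for comparison.
joint = [[f0[a] * g[a][b] * f1[b] for b in range(2)] for a in range(2)]
total = sum(sum(row) for row in joint)
exact = [sum(joint[a]) / total for a in range(2)]
```

On graphs with cycles, the same local updates are iterated to a fixed point, but convergence and exactness are no longer guaranteed, which is the regime the thesis studies.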
[86]
D. Puzhay, “Modeling Bug Reporter Reputation,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
Tracking and resolving software bugs are very important tasks for software developers and maintainers. Bug-tracking systems are tools which are widely used in open source projects to support these activities. The empirical software engineering research community pays considerable attention to bug-tracking-related topics in order to provide users of bug-tracking systems with adequate software and tool support. Bug-tracking is a highly social process which requires constant communication between developers and bug reporters. However, the inherent social structure of bug-tracking systems and its influence on everyday bug-tracking has so far been poorly studied. In this work I address the role of bug reporter reputation. Using publicly available information from a bug-tracking system database, I model bug reporter reputation to check whether there is any evidence of a relation between a reporter's reputation and the attention his bugs get from developers. If reputation actually plays an important role in bug-tracking activities and can relatively easily be extracted, existing prediction techniques could potentially be improved by using reputation as an additional input variable; bug-tracking software could be supported with a more formal notion of reporter reputation.
Export
BibTeX
@mastersthesis{Puzhay2009, TITLE = {Modeling Bug Reporter Reputation}, AUTHOR = {Puzhay, Dmytro}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Tracking and resolving software bugs are very important tasks for software developers and maintainers. Bug-tracking systems are tools which are widely used in open source projects to support these activities. The empirical software engineering research community pays considerable attention to bug-tracking-related topics in order to provide users of bug-tracking systems with adequate software and tool support. Bug-tracking is a highly social process which requires constant communication between developers and bug reporters. However, the inherent social structure of bug-tracking systems and its influence on everyday bug-tracking has so far been poorly studied. In this work I address the role of bug reporter reputation. Using publicly available information from a bug-tracking system database, I model bug reporter reputation to check whether there is any evidence of a relation between a reporter's reputation and the attention his bugs get from developers. If reputation actually plays an important role in bug-tracking activities and can relatively easily be extracted, existing prediction techniques could potentially be improved by using reputation as an additional input variable; bug-tracking software could be supported with a more formal notion of reporter reputation.}, }
Endnote
%0 Thesis %A Puzhay, Dmytro %Y Wilhelm, Reinhard %A referee: Premraj, Rahul %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Modeling Bug Reporter Reputation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BA88-7 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V master %9 master %X Tracking and resolving software bugs are very important tasks for software developers and maintainers. Bug-tracking systems are tools which are widely used in open source projects to support these activities. The empirical software engineering research community pays considerable attention to bug-tracking-related topics in order to provide users of bug-tracking systems with adequate software and tool support. Bug-tracking is a highly social process which requires constant communication between developers and bug reporters. However, the inherent social structure of bug-tracking systems and its influence on everyday bug-tracking has so far been poorly studied. In this work I address the role of bug reporter reputation. Using publicly available information from a bug-tracking system database, I model bug reporter reputation to check whether there is any evidence of a relation between a reporter's reputation and the attention his bugs get from developers. If reputation actually plays an important role in bug-tracking activities and can relatively easily be extracted, existing prediction techniques could potentially be improved by using reputation as an additional input variable; bug-tracking software could be supported with a more formal notion of reporter reputation.
[87]
R. Ragneala, “A Useful Resource for Defect Prediction Models,” Universität des Saarlandes, Saarbrücken, 2009.
Abstract
Predicting likely software defects in the future is valuable for project managers when planning resource allocation for software testing. But building prediction models using only code metrics may not suffice for accurate results. In this work, we investigate the value of code history metrics that can be collected from the project's version archives for the purpose of defect prediction. Our results suggest that prediction models built using code history metrics outperform those using traditional code metrics only.
Export
BibTeX
@mastersthesis{Ragneala2009, TITLE = {A Useful Resource for Defect Prediction Models}, AUTHOR = {Ragneala, Roxana}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, ABSTRACT = {Predicting likely software defects in the future is valuable for project managers when planning resource allocation for software testing. But building prediction models using only code metrics may not suffice for accurate results. In this work, we investigate the value of code history metrics that can be collected from the project's version archives for the purpose of defect prediction. Our results suggest that prediction models built using code history metrics outperform those using traditional code metrics only.}, }
Endnote
%0 Thesis %A Ragneala, Roxana %Y Zeller, Andreas %A referee: Weikum, Gerhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T A Useful Resource for Defect Prediction Models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BA8D-E %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V master %9 master %X Predicting likely software defects in the future is valuable for project managers when planning resource allocation for software testing. But building prediction models using only code metrics may not suffice for accurate results. In this work, we investigate the value of code history metrics that can be collected from the project's version archives for the purpose of defect prediction. Our results suggest that prediction models built using code history metrics outperform those using traditional code metrics only.
[88]
C. Rizkallah, “Proof Representations for Higher-Order Logic,” Universität des Saarlandes, Saarbrücken, 2009.
Export
BibTeX
@mastersthesis{Rizkallah2009, TITLE = {Proof Representations for Higher-Order Logic}, AUTHOR = {Rizkallah, Christine}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, }
Endnote
%0 Thesis %A Rizkallah, Christine %Y Brown, Chad %A referee: Smolka, Gert %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Proof Representations for Higher-Order Logic : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BA90-4 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V master %9 master
[89]
T. Tylenda, “Time-aware Link Prediction in Evolving Social Networks,” Universität des Saarlandes, Saarbrücken, 2009.
Export
BibTeX
@mastersthesis{tylenda09, TITLE = {Time-aware Link Prediction in Evolving Social Networks}, AUTHOR = {Tylenda, Tomasz}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-206CBE96EFA630DEC1257553004EF89D-tylenda09}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2009}, DATE = {2009}, }
Endnote
%0 Thesis %A Tylenda, Tomasz %Y Weikum, Gerhard %A referee: Bedathur, Srikanta %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Time-aware Link Prediction in Evolving Social Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-17CD-7 %F EDOC: 520431 %F OTHER: Local-ID: C1256DBF005F876D-206CBE96EFA630DEC1257553004EF89D-tylenda09 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2009 %V master %9 master
2008
[90]
L. M. Andreescu, “Pricing Information Goods in an Agent-based Information Filtering System,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{Andreescu08, TITLE = {Pricing Information Goods in an Agent-based Information Filtering System}, AUTHOR = {Andreescu, Laura Maria}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-55AF99F2B1A2EFFFC12575350042D9D0-Andreescu08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Andreescu, Laura Maria %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Pricing Information Goods in an Agent-based Information Filtering System : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A9E-A %F EDOC: 428293 %F OTHER: Local-ID: C125756E0038A185-55AF99F2B1A2EFFFC12575350042D9D0-Andreescu08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master
[91]
O.-M. Ciobotaru, “Efficient Long-term Secure Universally Composable Commitments,” Universität des Saarlandes, Saarbrücken, 2008.
Abstract
Long-term security ensures that a protocol remains secure even in the future, when the adversarial computational power could, potentially, become unlimited. The notion of universal composability preserves the security of a cryptographic protocol when it is used in combination with any other protocols, in possibly complex systems. The area of long-term universally composable secure protocols has been developed mostly by Müller-Quade and Unruh. Their research conducted so far has shown the existence of secure long-term UC commitments under general cryptographic assumptions, thus without having an emphasis on the efficiency of the protocols designed. Building on their work and using very efficient zero-knowledge proofs of knowledge from [CL02], this thesis presents a new long-term universally composable secure commitment protocol that is both efficient and plausible to use in practice.
Export
BibTeX
@mastersthesis{Ciobotaru2008, TITLE = {Efficient Long-term Secure Universally Composable Commitments}, AUTHOR = {Ciobotaru, Oana-Madalina}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {Long-term security ensures that a protocol remains secure even in the future, when the adversarial computational power could, potentially, become unlimited. The notion of universal composability preserves the security of a cryptographic protocol when it is used in combination with any other protocols, in possibly complex systems. The area of long-term universally composable secure protocols has been developed mostly by M{\"u}ller-Quade and Unruh. Their research conducted so far has shown the existence of secure long-term UC commitments under general cryptographic assumptions, thus without having an emphasis on the efficiency of the protocols designed. Building on their work and using very efficient zero-knowledge proofs of knowledge from [CL02], this thesis presents a new long-term universally composable secure commitment protocol that is both efficient and plausible to use in practice.}, }
Endnote
%0 Thesis %A Ciobotaru, Oana-Madalina %A referee: Unruh, Dominique %Y Backes, Michael %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Efficient Long-term Secure Universally Composable Commitments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BABF-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master %X Long-term security ensures that a protocol remains secure even in the future, when the adversarial computational power could, potentially, become unlimited. The notion of universal composability preserves the security of a cryptographic protocol when it is used in combination with any other protocols, in possibly complex systems. The area of long-term universally composable secure protocols has been developed mostly by M&#252;ller-Quade and Unruh. Their research conducted so far has shown the existence of secure long-term UC commitments under general cryptographic assumptions, thus without having an emphasis on the efficiency of the protocols designed. Building on their work and using very efficient zero-knowledge proofs of knowledge from [CL02], this thesis presents a new long-term universally composable secure commitment protocol that is both efficient and plausible to use in practice.
[92]
M. Dudev, “Personalization of Search on Structured Data,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{dudev08, TITLE = {Personalization of Search on Structured Data}, AUTHOR = {Dudev, Minko}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-7EBBA7D368754138C12574C900472DFD-dudev08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Dudev, Minko %Y Weikum, Gerhard %A referee: Zeller, Andreas %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Personalization of Search on Structured Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1CA6-C %F EDOC: 428294 %F OTHER: Local-ID: C125756E0038A185-7EBBA7D368754138C12574C900472DFD-dudev08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master
[93]
Q. Gao, “Low Bit Rate Video Compression Using Inpainting PDEs and Optic Flow,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{GaoMaster08, TITLE = {Low Bit Rate Video Compression Using Inpainting {PDE}s and Optic Flow}, AUTHOR = {Gao, Qi}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-A8F6E9D8D03F66B3C1257590002ABCDE-GaoMaster08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Gao, Qi %Y Weickert, Joachim %A referee: Bruhn, Andres %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Low Bit Rate Video Compression Using Inpainting PDEs and Optic Flow : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A7E-1 %F EDOC: 428295 %F OTHER: Local-ID: C125756E0038A185-A8F6E9D8D03F66B3C1257590002ABCDE-GaoMaster08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master
[94]
I. Georgiev, “RTfact Concepts for Generic Ray Tracing,” Universität des Saarlandes, Saarbrücken, 2008.
Abstract
For a long time now, interactive 3D graphics has been dominated by rasterization algorithms. However, thanks to more than a decade of research and the fast evolution of computer hardware, ray tracing has recently achieved real-time performance. Thus, it is likely that ray tracing will become a commodity choice for adding complex lighting effects to real-time rendering engines. Nonetheless, interactive ray tracing research has been mostly concentrated on a few specific combinations of algorithms and data structures. In this thesis we present RTfact, an attempt to bring the different aspects of ray tracing together in a component-oriented, generic, and portable way, without sacrificing the performance benefits of hand-tuned single-purpose implementations. RTfact is a template library consisting of packet-centric components combined into an efficient ray tracing framework. Our generic design approach with loosely coupled algorithms and data structures allows for seamless integration of new algorithms with maximum runtime performance, while leveraging as much of the existing code base as possible. The SIMD abstraction layer of RTfact enables easy porting to new microprocessor architectures with wider SIMD instruction sets without the need of modifying existing code. The efficiency of C++ templates allows us to achieve fine component granularity and to incorporate a flexible physically-based surface shading model, which enables exploitation of ray coherence. As a proof of concept we apply the library to a variety of rendering tasks and demonstrate its ability to deliver performance equal to existing optimized implementations.
Export
BibTeX
@mastersthesis{Georgiev2008, TITLE = {{RT}fact Concepts for Generic Ray Tracing}, AUTHOR = {Georgiev, Iliyan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {For a long time now, interactive 3D graphics has been dominated by rasterization algorithms. However, thanks to more than a decade of research and the fast evolution of computer hardware, ray tracing has recently achieved real-time performance. Thus, it is likely that ray tracing will become a commodity choice for adding complex lighting effects to real-time rendering engines. Nonetheless, interactive ray tracing research has been mostly concentrated on a few specific combinations of algorithms and data structures. In this thesis we present RTfact, an attempt to bring the different aspects of ray tracing together in a component-oriented, generic, and portable way, without sacrificing the performance benefits of hand-tuned single-purpose implementations. RTfact is a template library consisting of packet-centric components combined into an efficient ray tracing framework. Our generic design approach with loosely coupled algorithms and data structures allows for seamless integration of new algorithms with maximum runtime performance, while leveraging as much of the existing code base as possible. The SIMD abstraction layer of RTfact enables easy porting to new microprocessor architectures with wider SIMD instruction sets without the need of modifying existing code. The efficiency of C++ templates allows us to achieve fine component granularity and to incorporate a flexible physically-based surface shading model, which enables exploitation of ray coherence. As a proof of concept we apply the library to a variety of rendering tasks and demonstrate its ability to deliver performance equal to existing optimized implementations.}, }
Endnote
%0 Thesis %A Georgiev, Iliyan %Y Slusallek, Philipp %A referee: Hack, Sebastian %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T RTfact Concepts for Generic Ray Tracing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BB2C-F %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master %X For a long time now, interactive 3D graphics has been dominated by rasterization algorithms. However, thanks to more than a decade of research and the fast evolution of computer hardware, ray tracing has recently achieved real-time performance. Thus, it is likely that ray tracing will become a commodity choice for adding complex lighting effects to real-time rendering engines. Nonetheless, interactive ray tracing research has mostly concentrated on a few specific combinations of algorithms and data structures. In this thesis we present RTfact, an attempt to bring the different aspects of ray tracing together in a component-oriented, generic, and portable way, without sacrificing the performance benefits of hand-tuned single-purpose implementations. RTfact is a template library consisting of packet-centric components combined into an efficient ray tracing framework. Our generic design approach with loosely coupled algorithms and data structures allows for seamless integration of new algorithms with maximum runtime performance, while leveraging as much of the existing code base as possible. The SIMD abstraction layer of RTfact enables easy porting to new microprocessor architectures with wider SIMD instruction sets without the need to modify existing code. The efficiency of C++ templates allows us to achieve fine component granularity and to incorporate a flexible physically-based surface shading model, which enables exploitation of ray coherence. As a proof of concept we apply the library to a variety of rendering tasks and demonstrate its ability to deliver performance equal to that of existing optimized implementations.
[95]
M. A. Granados Velásquez, “Background Estimation from Photographs with Application to Ghost Removal in High Dynamic Range Image Reconstruction,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{GranadosMaster08, TITLE = {Background Estimation from Photographs with Application to Ghost Removal in High Dynamic Range Image Reconstruction}, AUTHOR = {Granados Vel{\'a}squez, Miguel Andr{\'e}s}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-1A9849076B83BAE3C1257590002ADC6E-GranadosMaster08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Granados Vel&#225;squez, Miguel Andr&#233;s %Y Slusallek, Philipp %A referee: Lensch, Hendrik P. A. %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society %T Background Estimation from Photographs with Application to Ghost Removal in High Dynamic Range Image Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A7A-9 %F EDOC: 428297 %F OTHER: Local-ID: C125756E0038A185-1A9849076B83BAE3C1257590002ADC6E-GranadosMaster08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master
[96]
M. L. Hamerlik, “Anonymity and Censorship Resistance in Semantic Overlay Networks,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{HamerlikMaster08, TITLE = {Anonymity and Censorship Resistance in Semantic Overlay Networks}, AUTHOR = {Hamerlik, Marek Lech}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-BE5670609AEB3F01C1257590002B14E2-HamerlikMaster08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Hamerlik, Marek Lech %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society %T Anonymity and Censorship Resistance in Semantic Overlay Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1A78-D %F EDOC: 428298 %F OTHER: Local-ID: C125756E0038A185-BE5670609AEB3F01C1257590002B14E2-HamerlikMaster08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master
[97]
S. Holder, “Replication in Unstructured Peer-to-Peer Networks with Availability Constraints,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{Holder08, TITLE = {Replication in Unstructured Peer-to-Peer Networks with Availability Constraints}, AUTHOR = {Holder, Stefan}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-D791C134663D6D6AC12574CC003593E6-Holder08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Holder, Stefan %Y Weikum, Gerhard %A referee: Wilhelm, Reinhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Replication in Unstructured Peer-to-Peer Networks with Availability Constraints : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1AA4-9 %F EDOC: 428299 %F OTHER: Local-ID: C125756E0038A185-D791C134663D6D6AC12574CC003593E6-Holder08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master
[98]
L. Kasradze, “Implementation of a File-based Indexing Framework for the TopX Search Engine,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{kasradze07, TITLE = {Implementation of a File-based Indexing Framework for the {TopX} Search Engine}, AUTHOR = {Kasradze, Levan}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-EF4294FB083A0396C1257456003A6530-kasradze07}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Kasradze, Levan %Y Weikum, Gerhard %A referee: Bast, Hannah %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Implementation of a File-based Indexing Framework for the TopX Search Engine : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1AAE-6 %F EDOC: 428300 %F OTHER: Local-ID: C125756E0038A185-EF4294FB083A0396C1257456003A6530-kasradze07 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master
[99]
G. Manolache, “Index-based Snippet Generation,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{manolache08, TITLE = {Index-based Snippet Generation}, AUTHOR = {Manolache, Gabriel}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-515D17CBD7A6628FC125747200462E5B-manolache08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Manolache, Gabriel %Y Bast, Hannah %A referee: Weikum, Gerhard %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Index-based Snippet Generation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1BEE-0 %F EDOC: 428303 %F OTHER: Local-ID: C125756E0038A185-515D17CBD7A6628FC125747200462E5B-manolache08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master
[100]
H. Peters, “Hardware and Software Extensions for a FTIR Multi-Touch Interface,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{Peters2008, TITLE = {Hardware and Software Extensions for a {FTIR} Multi-Touch Interface}, AUTHOR = {Peters, Henning}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Peters, Henning %Y Lensch, Hendrik %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Hardware and Software Extensions for a FTIR Multi-Touch Interface : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BB73-D %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master
[101]
M. Rusinov, “Homomorphism Homogeneous Graphs,” Universität des Saarlandes, Saarbrücken, 2008.
Abstract
Homogeneous structures are a well-studied research area with a variety of uses, such as constructions in model theory and permutation group theory. Recently, Cameron and Nesetril introduced homomorphism homogeneity by incorporating homomorphisms into the definition of homogeneity. This has attracted a fair bit of attention from the research community, and a growing amount of research has been done in this area for different relational structures. The first goal of this thesis is to investigate the different classes of homomorphism homogeneous simple undirected graphs with respect to different kinds of homomorphisms and to study the relations between these classes. Although homogeneous graphs have been analyzed heavily, little has been done for homomorphism homogeneous graphs. Cameron and Nesetril posed two open questions when they first defined these graphs. We answer both questions and also attempt to classify the homomorphism homogeneous graphs. This, we believe, opens up future possibilities for more analysis of these structures. In the thesis we also treat the category of graphs with loops allowed and further extend the idea of homogeneity by expanding the list of homomorphisms that are taken into consideration in the definitions.
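The central definition can be illustrated by a small brute-force checker (a hypothetical sketch, not code from the thesis): a finite graph is homomorphism homogeneous if every homomorphism between finite induced subgraphs extends to an endomorphism of the whole graph. The triangle K3 passes this test, while the three-vertex path fails it (the map sending the two endpoints to an adjacent pair cannot be extended):

```python
# Brute-force check of homomorphism homogeneity for a tiny undirected graph,
# given as a dict mapping each vertex to its set of neighbours. Exponential,
# so only usable for very small graphs; purely an illustration of the
# definition, not an algorithm from the thesis.
from itertools import combinations, product

def is_partial_hom(adj, f):
    # f (a dict) is a homomorphism between induced subgraphs: every adjacent
    # pair in its domain must map to an adjacent pair.
    return all(f[v] in adj[f[u]]
               for u, v in combinations(f, 2) if v in adj[u])

def extends_to_endomorphism(adj, f):
    # Try every completion of f to a total map and test edge preservation.
    rest = [v for v in adj if v not in f]
    for imgs in product(adj, repeat=len(rest)):
        g = {**f, **dict(zip(rest, imgs))}
        if is_partial_hom(adj, g):
            return True
    return False

def is_hom_homogeneous(adj):
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for dom in combinations(verts, k):
            for imgs in product(verts, repeat=k):
                f = dict(zip(dom, imgs))
                if is_partial_hom(adj, f) and not extends_to_endomorphism(adj, f):
                    return False
    return True

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # K3: hom. homogeneous
path3 = {0: {1}, 1: {0, 2}, 2: {1}}            # P3: not hom. homogeneous
print(is_hom_homogeneous(triangle), is_hom_homogeneous(path3))
```

For the path, the partial map {0 ↦ 0, 2 ↦ 1} is a homomorphism of induced subgraphs (its domain spans no edge), yet no image of the middle vertex is adjacent to both 0 and 1, so no endomorphism extends it.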
Export
BibTeX
@mastersthesis{Rusinov2008, TITLE = {Homomorphism Homogeneous Graphs}, AUTHOR = {Rusinov, Momchil}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, ABSTRACT = {Homogeneous structures are a well-studied research area with a variety of uses, such as constructions in model theory and permutation group theory. Recently, Cameron and Nesetril introduced homomorphism homogeneity by incorporating homomorphisms into the definition of homogeneity. This has attracted a fair bit of attention from the research community, and a growing amount of research has been done in this area for different relational structures. The first goal of this thesis is to investigate the different classes of homomorphism homogeneous simple undirected graphs with respect to different kinds of homomorphisms and to study the relations between these classes. Although homogeneous graphs have been analyzed heavily, little has been done for homomorphism homogeneous graphs. Cameron and Nesetril posed two open questions when they first defined these graphs. We answer both questions and also attempt to classify the homomorphism homogeneous graphs. This, we believe, opens up future possibilities for more analysis of these structures. In the thesis we also treat the category of graphs with loops allowed and further extend the idea of homogeneity by expanding the list of homomorphisms that are taken into consideration in the definitions.}, }
Endnote
%0 Thesis %A Rusinov, Momchil %Y Mehlhorn, Kurt %A referee: Bl&#228;ser, Markus %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Homomorphism Homogeneous Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BB7C-C %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master %X Homogeneous structures are a well-studied research area with a variety of uses, such as constructions in model theory and permutation group theory. Recently, Cameron and Nesetril introduced homomorphism homogeneity by incorporating homomorphisms into the definition of homogeneity. This has attracted a fair bit of attention from the research community, and a growing amount of research has been done in this area for different relational structures. The first goal of this thesis is to investigate the different classes of homomorphism homogeneous simple undirected graphs with respect to different kinds of homomorphisms and to study the relations between these classes. Although homogeneous graphs have been analyzed heavily, little has been done for homomorphism homogeneous graphs. Cameron and Nesetril posed two open questions when they first defined these graphs. We answer both questions and also attempt to classify the homomorphism homogeneous graphs. This, we believe, opens up future possibilities for more analysis of these structures. In the thesis we also treat the category of graphs with loops allowed and further extend the idea of homogeneity by expanding the list of homomorphisms that are taken into consideration in the definitions.
[102]
R. Socher, “A Learning-Based Hierarchical Model for Vessel Segmentation,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{socher08, TITLE = {A Learning-Based Hierarchical Model for Vessel Segmentation}, AUTHOR = {Socher, Richard}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-46F39494700CC85AC12574AB00351D0B-socher08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Socher, Richard %Y Weikum, Gerhard %A referee: Weikert, Joachim %+ Databases and Information Systems, MPI for Informatics, Max Planck Society Machine Learning, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T A Learning-Based Hierarchical Model for Vessel Segmentation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1AA8-1 %F EDOC: 428309 %F OTHER: Local-ID: C125756E0038A185-46F39494700CC85AC12574AB00351D0B-socher08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %V master %9 master
[103]
B. Taneva, “Conjoint Analysis: A Tool for Preference Analysis,” Universität des Saarlandes, Saarbrücken, 2008.
Export
BibTeX
@mastersthesis{TanevaMaster08, TITLE = {Conjoint Analysis: A Tool for Preference Analysis}, AUTHOR = {Taneva, Bilyana}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-F74EE865F73CA9F0C1257590002B6F7B-TanevaMaster08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2008}, DATE = {2008}, }
Endnote
%0 Thesis %A Taneva, Bilyana %Y Giesen, Joachim %A referee: Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Conjoint Analysis: A Tool for Preference Analysis : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1AAC-A %F EDOC: 428310 %F OTHER: Local-ID: C125756E0038A185-F74EE865F73CA9F0C1257590002B6F7B-TanevaMaster08 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2008 %P 54 S. %V master %9 master
2007
[104]
V. Alvarez Amaya, “Approximation of Minimum Spanning Trees of Set of Points in the Hausdorff Metric,” Universität des Saarlandes, 2007.
Export
BibTeX
@mastersthesis{Alvarez2007, TITLE = {Approximation of Minimum Spanning Trees of Set of Points in the Hausdorff Metric}, AUTHOR = {Alvarez Amaya, Victor}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, YEAR = {2007}, DATE = {2007}, }
Endnote
%0 Thesis %A Alvarez Amaya, Victor %Y Seidel, Raimund %A referee: Funke, Stefan %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Approximation of Minimum Spanning Trees of Set of Points in the Hausdorff Metric : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-BB8E-2 %I Universit&#228;t des Saarlandes %D 2007 %V master %9 master
[105]
J. Bogojeska, “Stability Analysis of Oncogenetic Trees Mixture Models,” Universität des Saarlandes, Saarbrücken, 2007.
Export
BibTeX
@mastersthesis{Bogojeska2007a, TITLE = {Stability Analysis of Oncogenetic Trees Mixture Models}, AUTHOR = {Bogojeska, Jasmina}, LANGUAGE = {eng}, LOCALID = {Local-ID: C12573CC004A8E26-7A5625CE2DBE4839C1257283004617FD-Bogojeska2007a}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, }
Endnote
%0 Thesis %A Bogojeska, Jasmina %Y Rahnenf&#252;hrer, J&#246;rg %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Stability Analysis of Oncogenetic Trees Mixture Models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1DC3-2 %F EDOC: 356591 %F OTHER: Local-ID: C12573CC004A8E26-7A5625CE2DBE4839C1257283004617FD-Bogojeska2007a %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2007 %V master %9 master
[106]
I. I. Brudaru, “Heuristics for Average Diameter Approximation with External Memory Algorithms,” Universität des Saarlandes, Saarbrücken, 2007.
Export
BibTeX
@mastersthesis{Brudaru2007, TITLE = {Heuristics for Average Diameter Approximation with External Memory Algorithms}, AUTHOR = {Brudaru, Irina Ioana}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, }
Endnote
%0 Thesis %A Brudaru, Irina Ioana %Y Meyer, Ulrich %A referee: Funke, Stefan %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Heuristics for Average Diameter Approximation with External Memory Algorithms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-C325-9 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2007 %V master %9 master
[107]
M. Celikik, “Efficient Large-Scale Clustering of Spelling Variants, with Applications to Error-Tolerant Text Search,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
In this thesis, the following spelling variants clustering problem is considered: Given a list of distinct words, called a lexicon, compute (possibly overlapping) clusters of words which are spelling variants of each other. We are looking for algorithms that are both efficient and accurate. Accuracy is measured with respect to human judgment, e.g., a cluster is 100% accurate if it contains all true spelling variants of the unique correct word it contains and no other words, as judged by a human. We have sifted the large body of literature on approximate string searching and spelling correction for its applicability to our problem. We have combined various ideas from previous approaches into two new algorithms, with two distinctly different trade-offs between efficiency and accuracy. We have analyzed both algorithms and tested them experimentally on a variety of test collections, which were chosen to exhibit the whole spectrum of spelling errors as they occur in practice (human-made, OCR-induced, garbage). Our largest lexicon, containing roughly 25 million words, can be processed in half an hour on a single machine. The accuracies we obtain range from 88% to 95%. We show that previous approaches, if applied directly to our problem, are either significantly slower or significantly less accurate, or both. Our spelling variants clustering problem arises naturally in the context of search engine spelling correction of the following kind: For a given query, return not only documents matching the query words exactly but also those matching their spelling variants. This is the inverse of the well-known “did you mean: ...” web search engine feature, where the error tolerance is on the side of the query, and not on the side of the documents. We have integrated our algorithms with the CompleteSearch engine, and show that this feature can be achieved without significant blowup in either index size or query processing time.
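The clustering task itself can be stated compactly. The sketch below is only a naive O(n²) baseline under an assumed edit-distance threshold, not one of the thesis's two algorithms (whose point is precisely to avoid comparing all pairs at this scale); the function names and the threshold are made up for illustration:

```python
# Naive spelling-variants clustering: two words are variants if their
# Levenshtein distance is at most max_dist; clusters are the connected
# components of that relation, found with union-find.

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def cluster_variants(lexicon, max_dist=1):
    parent = list(range(len(lexicon)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i in range(len(lexicon)):
        for j in range(i + 1, len(lexicon)):
            if edit_distance(lexicon[i], lexicon[j]) <= max_dist:
                parent[find(i)] = find(j)
    clusters = {}
    for i, w in enumerate(lexicon):
        clusters.setdefault(find(i), []).append(w)
    return [sorted(c) for c in clusters.values()]

print(cluster_variants(["color", "colour", "colro", "search", "serach"],
                       max_dist=2))
# → [['color', 'colour', 'colro'], ['search', 'serach']]
```

Note that with this definition clusters are disjoint, whereas the thesis allows overlapping clusters; the baseline only conveys the shape of the problem.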
Export
BibTeX
@mastersthesis{Celikik2007, TITLE = {Efficient Large-Scale Clustering of Spelling Variants, with Applications to Error-Tolerant Text Search}, AUTHOR = {Celikik, Marjan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {In this thesis, the following spelling variants clustering problem is considered: Given a list of distinct words, called a lexicon, compute (possibly overlapping) clusters of words which are spelling variants of each other. We are looking for algorithms that are both efficient and accurate. Accuracy is measured with respect to human judgment, e.g., a cluster is 100\% accurate if it contains all true spelling variants of the unique correct word it contains and no other words, as judged by a human. We have sifted the large body of literature on approximate string searching and spelling correction for its applicability to our problem. We have combined various ideas from previous approaches into two new algorithms, with two distinctly different trade-offs between efficiency and accuracy. We have analyzed both algorithms and tested them experimentally on a variety of test collections, which were chosen to exhibit the whole spectrum of spelling errors as they occur in practice (human-made, OCR-induced, garbage). Our largest lexicon, containing roughly 25 million words, can be processed in half an hour on a single machine. The accuracies we obtain range from 88\% to 95\%. We show that previous approaches, if applied directly to our problem, are either significantly slower or significantly less accurate, or both. Our spelling variants clustering problem arises naturally in the context of search engine spelling correction of the following kind: For a given query, return not only documents matching the query words exactly but also those matching their spelling variants. This is the inverse of the well-known ``did you mean: ...'' web search engine feature, where the error tolerance is on the side of the query, and not on the side of the documents. We have integrated our algorithms with the CompleteSearch engine, and show that this feature can be achieved without significant blowup in either index size or query processing time.}, }
Endnote
%0 Thesis %A Celikik, Marjan %Y Weikum, Gerhard %A referee: Bast, Holger %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Efficient Large-Scale Clustering of Spelling Variants, with Applications to Error-Tolerant Text Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-C33D-4 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2007 %V master %9 master %X In this thesis, the following spelling variants clustering problem is considered: Given a list of distinct words, called a lexicon, compute (possibly overlapping) clusters of words which are spelling variants of each other. We are looking for algorithms that are both efficient and accurate. Accuracy is measured with respect to human judgment, e.g., a cluster is 100% accurate if it contains all true spelling variants of the unique correct word it contains and no other words, as judged by a human. We have sifted the large body of literature on approximate string searching and spelling correction for its applicability to our problem. We have combined various ideas from previous approaches into two new algorithms, with two distinctly different trade-offs between efficiency and accuracy. We have analyzed both algorithms and tested them experimentally on a variety of test collections, which were chosen to exhibit the whole spectrum of spelling errors as they occur in practice (human-made, OCR-induced, garbage). Our largest lexicon, containing roughly 25 million words, can be processed in half an hour on a single machine. The accuracies we obtain range from 88% to 95%. We show that previous approaches, if applied directly to our problem, are either significantly slower or significantly less accurate, or both. Our spelling variants clustering problem arises naturally in the context of search engine spelling correction of the following kind: For a given query, return not only documents matching the query words exactly but also those matching their spelling variants. This is the inverse of the well-known &#8220;did you mean: ...&#8221; web search engine feature, where the error tolerance is on the side of the query, and not on the side of the documents. We have integrated our algorithms with the CompleteSearch engine, and show that this feature can be achieved without significant blowup in either index size or query processing time.
[108]
A. Chitea, “Efficient Semantic Annotation of the English Wikipedia,” Universität des Saarlandes, Saarbrücken, 2007.
Export
BibTeX
@mastersthesis{Chitea2007, TITLE = {Efficient Semantic Annotation of the English Wikipedia}, AUTHOR = {Chitea, Alexandru}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, }
Endnote
%0 Thesis %A Chitea, Alexandru %Y Bast, Hannah %A referee: Weikum, Gerhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Efficient Semantic Annotation of the English Wikipedia : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-C3F9-D %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2007 %V master %9 master
[109]
D. Dumitriu, “Graph-based Conservative Surface Reconstruction,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
We propose a new approach for reconstructing a 2-manifold from a point sample in R³. Compared to previous algorithms, our approach is novel in that it throws away geometry information early on in the reconstruction process and mainly operates combinatorially on a graph structure. Furthermore, it is very conservative in creating adjacencies between samples in the vicinity of slivers, still we can prove that the resulting reconstruction faithfully resembles the original 2-manifold. While the theoretical proof requires an extremely high sampling density, our prototype implementation of the approach produces surprisingly good results on typical sample sets.
Export
BibTeX
@mastersthesis{Dumitriu2007, TITLE = {Graph-based Conservative Surface Reconstruction}, AUTHOR = {Dumitriu, Daniel}, LANGUAGE = {eng}, LOCALID = {Local-ID: C12573CC004A8E26-6DF2C53978AEEB3DC12573590049A125-Dumitriu2007}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {We propose a new approach for reconstructing a 2-manifold from a point sample in R&#179;. Compared to previous algorithms, our approach is novel in that it throws away geometry information early on in the reconstruction process and mainly operates combinatorially on a graph structure. Furthermore, it is very conservative in creating adjacencies between samples in the vicinity of slivers, still we can prove that the resulting reconstruction faithfully resembles the original 2-manifold. While the theoretical proof requires an extremely high sampling density, our prototype implementation of the approach produces surprisingly good results on typical sample sets.}, }
Endnote
%0 Thesis %A Dumitriu, Daniel %Y Kutz, Martin %A referee: Funke, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Graph-based Conservative Surface Reconstruction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1DAA-B %F EDOC: 356698 %F OTHER: Local-ID: C12573CC004A8E26-6DF2C53978AEEB3DC12573590049A125-Dumitriu2007 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2007 %V master %9 master %X We propose a new approach for reconstructing a 2-manifold from a point sample in R&#179;. Compared to previous algorithms, our approach is novel in that it throws away geometry information early on in the reconstruction process and mainly operates combinatorially on a graph structure. Furthermore, it is very conservative in creating adjacencies between samples in the vicinity of slivers, still we can prove that the resulting reconstruction faithfully resembles the original 2-manifold. While the theoretical proof requires an extremely high sampling density, our prototype implementation of the approach produces surprisingly good results on typical sample sets.
[110]
S. Elbassuoni, “Adaptive Personalization of Web Search,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
An often-stated problem with state-of-the-art web search is its lack of user adaptation, as all users are presented with the same search results for a given query string. A user submitting an ambiguous query such as “java” with a strong interest in traveling might appreciate finding pages related to the Indonesian island Java. However, if the same user searched for programming tutorials a few minutes ago, the situation would be completely different, and call for programming-related results. Furthermore, suppose our sample user searches for “java hashmap”. Again imposing her interest in traveling might this time have the contrary effect and even harm the result quality. Thus the effectiveness of web search personalization varies greatly depending on the query, the user, and the search context. To this end, carefully choosing the right personalization strategy in a context-sensitive manner is critical for an improvement of search results. In this thesis, we present a general framework that dynamically adapts the query-result ranking to the different information needs in order to improve the search experience for the individual user. We distinguish three different search goals, namely whether the user re-searches known information, delves deeper into a topic she is generally interested in, or satisfies an ad-hoc information need. We take an implicit relevance feedback approach that makes use of the user’s web interactions, but vary what constitutes the examples of relevant and irrelevant information according to the user’s search mode. We show that incorporating user behavior data can significantly improve the ordering of top results in a real web search setting.
Export
BibTeX
@mastersthesis{Elbassuoni-Master, TITLE = {Adaptive Personalization of Web Search}, AUTHOR = {Elbassuoni, Shady}, LANGUAGE = {eng}, LOCALID = {Local-ID: C12573CC004A8E26-6252DBEA76F19925C125730E0040A735-Elbassuoni-Master}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {An often-stated problem with state-of-the-art web search is its lack of user adaptation, as all users are presented with the same search results for a given query string. A user submitting an ambiguous query such as {\textquotedblleft}java{\textquotedblright} with a strong interest in traveling might appreciate finding pages related to the Indonesian island Java. However, if the same user searched for programming tutorials a few minutes ago, the situation would be completely different, and call for programming-related results. Furthermore, suppose our sample user searches for {\textquotedblleft}java hashmap{\textquotedblright}. Again imposing her interest in traveling might this time have the contrary effect and even harm the result quality. Thus the effectiveness of web search personalization varies greatly depending on the query, the user, and the search context. To this end, carefully choosing the right personalization strategy in a context-sensitive manner is critical for an improvement of search results. In this thesis, we present a general framework that dynamically adapts the query-result ranking to the different information needs in order to improve the search experience for the individual user. We distinguish three different search goals, namely whether the user re-searches known information, delves deeper into a topic she is generally interested in, or satisfies an ad-hoc information need. We take an implicit relevance feedback approach that makes use of the user{\textquoteright}s web interactions, but vary what constitutes the examples of relevant and irrelevant information according to the user{\textquoteright}s search mode. We show that incorporating user behavior data can significantly improve the ordering of top results in a real web search setting.}, }
Endnote
%0 Thesis %A Elbassuoni, Shady %Y Weikum, Gerhard %A referee: Hermanns, Holger %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Adaptive Personalization of Web Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1D9F-5 %F EDOC: 356495 %F OTHER: Local-ID: C12573CC004A8E26-6252DBEA76F19925C125730E0040A735-Elbassuoni-Master %I Universität des Saarlandes %C Saarbrücken %D 2007 %V master %9 master %X An often stated problem in state-of-the-art web search is its lack of user adaptation, as all users are presented with the same search results for a given query string. A user submitting an ambiguous query such as “java” with a strong interest in traveling might appreciate finding pages related to the Indonesian island Java. However, if the same user searched for programming tutorials a few minutes ago, the situation would be completely different, and call for programming-related results. Furthermore, suppose our sample user searches for “java hashmap”. Again, imposing her interest in traveling might this time have the contrary effect and even harm the result quality. Thus the effectiveness of a personalization of web search shows high variance in performance depending on the query, the user and the search context. To this end, carefully choosing the right personalization strategy in a context-sensitive manner is critical for an improvement of search results. In this thesis, we present a general framework that dynamically adapts the query-result ranking to the different information needs in order to improve the search experience for the individual user. We distinguish three different search goals, namely whether the user re-searches known information, delves deeper into a topic she is generally interested in, or satisfies an ad-hoc information need. We take an implicit relevance feedback approach that makes use of the user’s web interactions, but vary what constitutes the examples of relevant and irrelevant information according to the user’s search mode. We show that incorporating user behavior data can significantly improve the ordering of top results in a real web search setting.
[111]
P. Emeliyanenko, “Visualization of Points and Segments of Real Algebraic Plane Curves,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
This thesis presents an exact and complete approach for visualization of segments and points of real plane algebraic curves given in implicit form $f(x,y) = 0$. A curve segment is a distinct curve branch consisting of regular points only. Visualization of algebraic curves having self-intersections and isolated points constitutes the main challenge. Visualization of curve segments involves even more difficulties, since here we are faced with the problem of discriminating different curve branches, which can pass arbitrarily close to each other. Our approach is robust and efficient (as shown by our benchmarks); it combines the advantages of both curve tracking and space subdivision methods and is able to correctly rasterize segments of arbitrary-degree algebraic curves using double, multi-precision or exact rational arithmetic.
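The space-subdivision half of such an approach can be sketched as follows. This toy version samples signs of f at cell corners and the cell center, which is fast but inexact (sign sampling can miss thin branches and cannot separate close branches), which is precisely why certified, exact tests are needed for a complete method; all names here are illustrative.

```python
def rasterize(f, x0, x1, y0, y1, depth):
    """Toy space subdivision for plotting f(x, y) = 0: keep a cell only if
    sampled signs of f disagree, recurse until pixel-sized cells remain.
    Sign sampling is a heuristic stand-in for an exact inclusion test."""
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    samples = [f(x0, y0), f(x0, y1), f(x1, y0), f(x1, y1), f(xm, ym)]
    if min(samples) > 0 or max(samples) < 0:
        return []  # all sampled signs agree: assume no curve branch here
    if depth == 0:
        return [(xm, ym)]  # emit the cell center as a "pixel"
    return (rasterize(f, x0, xm, y0, ym, depth - 1) +
            rasterize(f, x0, xm, ym, y1, depth - 1) +
            rasterize(f, xm, x1, y0, ym, depth - 1) +
            rasterize(f, xm, x1, ym, y1, depth - 1))

# Unit circle: every emitted pixel center lies close to the true curve.
pixels = rasterize(lambda x, y: x * x + y * y - 1, -2, 2, -2, 2, depth=6)
```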
Export
BibTeX
@mastersthesis{Emel2007, TITLE = {Visualization of Points and Segments of Real Algebraic Plane Curves}, AUTHOR = {Emeliyanenko, Pavel}, LANGUAGE = {eng}, LOCALID = {Local-ID: C12573CC004A8E26-FB0C2A279BD897D1C12572900046C0A0-Emel2007}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {This thesis presents an exact and complete approach for visualization of segments and points of real plane algebraic curves given in implicit form $f(x,y) = 0$. A curve segment is a distinct curve branch consisting of regular points only. Visualization of algebraic curves having self-intersections and isolated points constitutes the main challenge. Visualization of curve segments involves even more difficulties, since here we are faced with the problem of discriminating different curve branches, which can pass arbitrarily close to each other. Our approach is robust and efficient (as shown by our benchmarks); it combines the advantages of both curve tracking and space subdivision methods and is able to correctly rasterize segments of arbitrary-degree algebraic curves using double, multi-precision or exact rational arithmetic.}, }
Endnote
%0 Thesis %A Emeliyanenko, Pavel %A referee: Wolpert, Nicola %Y Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Visualization of Points and Segments of Real Algebraic Plane Curves : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1DC6-B %F EDOC: 356729 %F OTHER: Local-ID: C12573CC004A8E26-FB0C2A279BD897D1C12572900046C0A0-Emel2007 %I Universität des Saarlandes %C Saarbrücken %D 2007 %V master %9 master %X This thesis presents an exact and complete approach for visualization of segments and points of real plane algebraic curves given in implicit form $f(x,y) = 0$. A curve segment is a distinct curve branch consisting of regular points only. Visualization of algebraic curves having self-intersections and isolated points constitutes the main challenge. Visualization of curve segments involves even more difficulties, since here we are faced with the problem of discriminating different curve branches, which can pass arbitrarily close to each other. Our approach is robust and efficient (as shown by our benchmarks); it combines the advantages of both curve tracking and space subdivision methods and is able to correctly rasterize segments of arbitrary-degree algebraic curves using double, multi-precision or exact rational arithmetic.
[112]
A. Fietzke, “Labelled Splitting,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
Saturation-based theorem provers are typically based on a calculus consisting of inference and reduction rules that operate on sets of clauses. While inference rules produce new clauses, reduction rules allow the removal or simplification of redundant clauses and are an essential ingredient for efficient implementations. The power of reduction rules can be further amplified by the use of the splitting rule, which is based on explicit case analysis on variable-disjoint components of a clause. In this thesis, I give a formalization of splitting and backtracking for first-order logic using a labelling scheme that annotates clauses and clause sets with additional information, and I present soundness and completeness results for the corresponding calculus. The backtracking process as formalized here generalizes optimizations that are currently being used, and I present the results of integrating the improved backtracking into SPASS.
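The decomposition step underlying the splitting rule can be sketched in a few lines: literals sharing a variable must stay in the same case, so the clause is partitioned by connectivity of variable sets (a minimal illustrative sketch; the literal representation and names are invented, and the thesis's labelled calculus is much richer):

```python
def split_components(clause):
    """Group the literals of a clause into variable-disjoint components,
    the case-analysis units of the splitting rule. Each literal is given
    as a (name, variable_set) pair; ground literals form singleton cases."""
    components = []  # list of (variables, literals) pairs
    for literal, variables in clause:
        merged_vars, merged_lits = set(variables), [literal]
        # Merge every existing component that shares a variable.
        for comp in [c for c in components if c[0] & set(variables)]:
            merged_vars |= comp[0]
            merged_lits += comp[1]
            components.remove(comp)
        components.append((merged_vars, merged_lits))
    return components

# P(x) | Q(x, y) | R(z) splits into the cases {P(x), Q(x, y)} and {R(z)}.
parts = split_components([("P(x)", {"x"}), ("Q(x,y)", {"x", "y"}), ("R(z)", {"z"})])
```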
Export
BibTeX
@mastersthesis{Fietzke2007, TITLE = {Labelled Splitting}, AUTHOR = {Fietzke, Arnaud}, LANGUAGE = {eng}, LOCALID = {Local-ID: C12573CC004A8E26-FBD818168DB4D640C12573D400507071-Fietzke2007}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {Saturation-based theorem provers are typically based on a calculus consisting of inference and reduction rules that operate on sets of clauses. While inference rules produce new clauses, reduction rules allow the removal or simplification of redundant clauses and are an essential ingredient for efficient implementations. The power of reduction rules can be further amplified by the use of the splitting rule, which is based on explicit case analysis on variable-disjoint components of a clause. In this thesis, I give a formalization of splitting and backtracking for first-order logic using a labelling scheme that annotates clauses and clause sets with additional information, and I present soundness and completeness results for the corresponding calculus. The backtracking process as formalized here generalizes optimizations that are currently being used, and I present the results of integrating the improved backtracking into SPASS.}, }
Endnote
%0 Thesis %A Fietzke, Arnaud %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society %T Labelled Splitting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1D84-0 %F EDOC: 356451 %F OTHER: Local-ID: C12573CC004A8E26-FBD818168DB4D640C12573D400507071-Fietzke2007 %I Universität des Saarlandes %C Saarbrücken %D 2007 %P III, 35 S. %V master %9 master %X Saturation-based theorem provers are typically based on a calculus consisting of inference and reduction rules that operate on sets of clauses. While inference rules produce new clauses, reduction rules allow the removal or simplification of redundant clauses and are an essential ingredient for efficient implementations. The power of reduction rules can be further amplified by the use of the splitting rule, which is based on explicit case analysis on variable-disjoint components of a clause. In this thesis, I give a formalization of splitting and backtracking for first-order logic using a labelling scheme that annotates clauses and clause sets with additional information, and I present soundness and completeness results for the corresponding calculus. The backtracking process as formalized here generalizes optimizations that are currently being used, and I present the results of integrating the improved backtracking into SPASS.
[113]
F. Horazal, “Towards a Natural Representation of Mathematics in Proof Assistants,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
In this thesis we investigate the proof assistant Scunak in order to explore the relationship between informal mathematical texts and their Scunak counterparts. The investigation is based on a case study in which we have formalized parts of an introductory book on real analysis. Based on this case study, we illustrate significant aspects of the formal representation of mathematics in Scunak. In particular, we present the formal proof of the example lim(1/n) = 0. Moreover, we present a comparison of Scunak with two well-known systems for formalizing mathematics, the Mizar System and Isabelle/HOL. We have proved the example lim(1/n) = 0 in Mizar and Isabelle/HOL as well and we relate certain features of formal mathematics in Mizar and Isabelle/HOL to corresponding features of the Scunak type theory in light of this example.
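Informally, the ε–N argument behind the running example lim(1/n) = 0 reads as follows (standard textbook reasoning, stated here for orientation, not quoted from the thesis):

```latex
Claim: \(\lim_{n\to\infty} \tfrac{1}{n} = 0\).
Proof sketch: given \(\varepsilon > 0\), choose \(N \in \mathbb{N}\) with
\(N > \tfrac{1}{\varepsilon}\) (Archimedean property). Then for all \(n \ge N\),
\[
  \left|\tfrac{1}{n} - 0\right| = \tfrac{1}{n} \le \tfrac{1}{N} < \varepsilon,
\]
so \(\tfrac{1}{n}\) converges to \(0\). \qed
```

Formalizing even this short argument exercises the systems' treatment of the Archimedean property, quantifier reasoning, and arithmetic side conditions, which is what makes it a useful comparison point across Scunak, Mizar, and Isabelle/HOL.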
Export
BibTeX
@mastersthesis{Horozal2007, TITLE = {Towards a Natural Representation of Mathematics in Proof Assistants}, AUTHOR = {Horazal, Fulya}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {In this thesis we investigate the proof assistant Scunak in order to explore the relationship between informal mathematical texts and their Scunak counterparts. The investigation is based on a case study in which we have formalized parts of an introductory book on real analysis. Based on this case study, we illustrate significant aspects of the formal representation of mathematics in Scunak. In particular, we present the formal proof of the example lim(1/n) = 0. Moreover, we present a comparison of Scunak with two well-known systems for formalizing mathematics, the Mizar System and Isabelle/HOL. We have proved the example lim(1/n) = 0 in Mizar and Isabelle/HOL as well and we relate certain features of formal mathematics in Mizar and Isabelle/HOL to corresponding features of the Scunak type theory in light of this example.}, }
Endnote
%0 Thesis %A Horazal, Fulya %Y Siekmann, Jörg %A referee: Smolka, Gert %A referee: Brown, Chad %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Towards a Natural Representation of Mathematics in Proof Assistants : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-CF7A-B %I Universität des Saarlandes %C Saarbrücken %D 2007 %V master %9 master %X In this thesis we investigate the proof assistant Scunak in order to explore the relationship between informal mathematical texts and their Scunak counterparts. The investigation is based on a case study in which we have formalized parts of an introductory book on real analysis. Based on this case study, we illustrate significant aspects of the formal representation of mathematics in Scunak. In particular, we present the formal proof of the example lim(1/n) = 0. Moreover, we present a comparison of Scunak with two well-known systems for formalizing mathematics, the Mizar System and Isabelle/HOL. We have proved the example lim(1/n) = 0 in Mizar and Isabelle/HOL as well and we relate certain features of formal mathematics in Mizar and Isabelle/HOL to corresponding features of the Scunak type theory in light of this example.
[114]
C. Hritcu, “Step-indexed Semantic Model of Types for the Functional Object Calculus,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
Step-indexed semantic models of types were proposed as an alternative to the purely syntactic proofs of type safety using subject-reduction. This thesis introduces a step-indexed model for the functional object calculus, and uses it to prove the soundness of an expressive type system with object types, subtyping, recursive and bounded quantified types.
Export
BibTeX
@mastersthesis{Hritcu2007, TITLE = {Step-indexed Semantic Model of Types for the Functional Object Calculus}, AUTHOR = {Hritcu, Catalin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {Step-indexed semantic models of types were proposed as an alternative to the purely syntactic proofs of type safety using subject-reduction. This thesis introduces a step-indexed model for the functional object calculus, and uses it to prove the soundness of an expressive type system with object types, subtyping, recursive and bounded quantified types.}, }
Endnote
%0 Thesis %A Hritcu, Catalin %Y Smolka, Gert %A referee: Hermanns, Holger %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Step-indexed Semantic Model of Types for the Functional Object Calculus : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D108-B %I Universität des Saarlandes %C Saarbrücken %D 2007 %V master %9 master %X Step-indexed semantic models of types were proposed as an alternative to the purely syntactic proofs of type safety using subject-reduction. This thesis introduces a step-indexed model for the functional object calculus, and uses it to prove the soundness of an expressive type system with object types, subtyping, recursive and bounded quantified types.
[115]
L. Machablishvili, “Computing k-hop Broadcast Trees Exactly,” Universität des Saarlandes, Saarbrücken, 2007.
Export
BibTeX
@mastersthesis{Machablishvili2007, TITLE = {Computing k-hop Broadcast Trees Exactly}, AUTHOR = {Machablishvili, Levan}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, }
Endnote
%0 Thesis %A Machablishvili, Levan %Y Funke, Stefan %A referee: Hermanns, Holger %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Computing k-hop Broadcast Trees Exactly : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D10E-0 %I Universität des Saarlandes %C Saarbrücken %D 2007 %V master %9 master
[116]
M. A. Maksoud, “Generating Code from Abstract VHDL Models,” Universität des Saarlandes, Saarbrücken, 2007.
Export
BibTeX
@mastersthesis{Maksoud2007, TITLE = {Generating Code from Abstract {VHDL} Models}, AUTHOR = {Maksoud, Mohamed Abdel}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, }
Endnote
%0 Thesis %A Maksoud, Mohamed Abdel %Y Wilhelm, Reinhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations %T Generating Code from Abstract VHDL Models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D127-5 %I Universität des Saarlandes %C Saarbrücken %D 2007 %V master %9 master
[117]
Y. Mileva, “Invariance with Optic Flow,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
Variational methods currently belong to the most accurate techniques for the computation of the displacement field between the frames of an image sequence. Accuracy and time performance improvements of these methods are achieved every year. Most of the effort is directed towards finding better data and smoothness terms for the energy functional. Usually, people work mainly with black-and-white image sequences. In this thesis we will consider only colour images, as we believe that the colour itself carries much more information than the grey value and can help us better estimate the optic flow. So far, most of the research done in optic flow computation does not consider the presence of realistic illumination changes in the image sequences. One of the main goals of this thesis is to find new constancy assumptions for the data term, which overcome the problems of severe illumination changes. So far, no research has been done on combining variational methods with statistical moments for the purpose of optic flow computation. The second goal of this thesis is to investigate how and to what extent optic flow methods can benefit from rotationally invariant moments. We will introduce a new variational methods framework that can combine all of the above-mentioned new assumptions into a successful optic flow computation technique.
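One concrete instance of an illumination-robust constancy assumption is gradient constancy, which is invariant under additive brightness changes while plain brightness constancy is not. A minimal 1-D demonstration (illustrative only; the thesis works with 2-D colour sequences and full variational energies):

```python
import numpy as np

# Two 1-D "frames": the same signal shifted by u = 1 pixel, with the second
# frame additionally brightened by a constant offset (an illumination change).
x = np.arange(100, dtype=float)
frame1 = np.sin(0.2 * x)
frame2 = np.sin(0.2 * (x - 1)) + 0.5

u = 1                   # true displacement
idx = np.arange(5, 95)  # interior samples, away from gradient boundary effects

# Brightness constancy I2(x) = I1(x - u) is violated by the offset ...
bc_residual = frame2[idx] - frame1[idx - u]
# ... but gradient constancy d/dx I2(x) = d/dx I1(x - u) still holds.
gc_residual = np.gradient(frame2)[idx] - np.gradient(frame1)[idx - u]
```

At the true displacement the gradient-constancy residual vanishes despite the brightness change, which is why such terms make the data term robust to illumination variation.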
Export
BibTeX
@mastersthesis{Mileva2007, TITLE = {Invariance with Optic Flow}, AUTHOR = {Mileva, Yana}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {Variational methods currently belong to the most accurate techniques for the computation of the displacement field between the frames of an image sequence. Accuracy and time performance improvements of these methods are achieved every year. Most of the effort is directed towards finding better data and smoothness terms for the energy functional. Usually, people work mainly with black-and-white image sequences. In this thesis we will consider only colour images, as we believe that the colour itself carries much more information than the grey value and can help us better estimate the optic flow. So far, most of the research done in optic flow computation does not consider the presence of realistic illumination changes in the image sequences. One of the main goals of this thesis is to find new constancy assumptions for the data term, which overcome the problems of severe illumination changes. So far, no research has been done on combining variational methods with statistical moments for the purpose of optic flow computation. The second goal of this thesis is to investigate how and to what extent optic flow methods can benefit from rotationally invariant moments. We will introduce a new variational methods framework that can combine all of the above-mentioned new assumptions into a successful optic flow computation technique.}, }
Endnote
%0 Thesis %A Mileva, Yana %Y Weickert, Joachim %A referee: Wilhelm, Reinhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Invariance with Optic Flow : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D19C-E %I Universität des Saarlandes %C Saarbrücken %D 2007 %V master %9 master %X Variational methods currently belong to the most accurate techniques for the computation of the displacement field between the frames of an image sequence. Accuracy and time performance improvements of these methods are achieved every year. Most of the effort is directed towards finding better data and smoothness terms for the energy functional. Usually, people work mainly with black-and-white image sequences. In this thesis we will consider only colour images, as we believe that the colour itself carries much more information than the grey value and can help us better estimate the optic flow. So far, most of the research done in optic flow computation does not consider the presence of realistic illumination changes in the image sequences. One of the main goals of this thesis is to find new constancy assumptions for the data term, which overcome the problems of severe illumination changes. So far, no research has been done on combining variational methods with statistical moments for the purpose of optic flow computation. The second goal of this thesis is to investigate how and to what extent optic flow methods can benefit from rotationally invariant moments. We will introduce a new variational methods framework that can combine all of the above-mentioned new assumptions into a successful optic flow computation technique.
[118]
S. Nenova, “Extraction of Attack Signatures,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
With the advance of technology, the need for fast reaction to remote attacks gains in importance. A common practice to help detect malicious activity is to install an Intrusion Detection System. Intrusion detection systems are equipped with a set of signatures, i.e. descriptions of known intrusion attempts. They monitor traffic and use the signatures to detect intrusion attempts. To date, attack signatures are still mostly derived manually. However, to ensure the security of computer systems and data, the speed and quality of signature generation have to be improved. To help achieve the task, we propose an approach for automatic extraction of attack signatures. In contrast to the majority of the existing research in the area, we do not confine our approach to a particular type of attack. In particular, we are the first to try signature extraction for attacks resulting from misconfigured security policies. Whereas the majority of existing approaches rely on statistical methods and require many attack instances in order to launch the signature generation mechanism, we use experimentation and need only a single attack instance. For experimentation, we combine an existing framework for capture and replay of system calls with an appropriate minimization algorithm. We propose three minimization algorithms: Delta Debugging, Binary Debugging and Consecutive Binary Debugging. We evaluate the performance of the different algorithms and test our approach with an example program. In all test cases, our application successfully extracts the attack signature. Our current results suggest that this is a promising approach that can help us defend better and faster against unknown attacks.
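The minimization step at the heart of this experimentation loop follows the delta-debugging idea; a self-contained sketch of the classic ddmin scheme (a generic illustration, not the thesis's Binary or Consecutive Binary Debugging variants; the attack predicate is invented):

```python
def ddmin(trace, triggers_attack):
    """Delta debugging: shrink an input (e.g. a captured system-call trace)
    to a small subsequence that still triggers the attack."""
    n = 2  # current granularity: number of chunks to split the trace into
    while len(trace) >= 2:
        chunk = max(1, len(trace) // n)
        subsets = [trace[i:i + chunk] for i in range(0, len(trace), chunk)]
        reduced = False
        for skip in range(len(subsets)):
            # Try the complement: everything except one chunk.
            complement = [e for j, s in enumerate(subsets) if j != skip for e in s]
            if triggers_attack(complement):
                trace, n = complement, max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(trace):
                break  # already at single-element granularity
            n = min(len(trace), n * 2)  # refine and retry
    return trace

# The "attack" fires only when calls 3 and 7 both appear in the trace;
# ddmin isolates those two calls from a ten-call recording.
signature = ddmin(list(range(10)), lambda t: 3 in t and 7 in t)
```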
Export
BibTeX
@mastersthesis{Nenova2007, TITLE = {Extraction of Attack Signatures}, AUTHOR = {Nenova, Stefana}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {With the advance of technology, the need for fast reaction to remote attacks gains in importance. A common practice to help detect malicious activity is to install an Intrusion Detection System. Intrusion detection systems are equipped with a set of signatures, i.e. descriptions of known intrusion attempts. They monitor traffic and use the signatures to detect intrusion attempts. To date, attack signatures are still mostly derived manually. However, to ensure the security of computer systems and data, the speed and quality of signature generation have to be improved. To help achieve the task, we propose an approach for automatic extraction of attack signatures. In contrast to the majority of the existing research in the area, we do not confine our approach to a particular type of attack. In particular, we are the first to try signature extraction for attacks resulting from misconfigured security policies. Whereas the majority of existing approaches rely on statistical methods and require many attack instances in order to launch the signature generation mechanism, we use experimentation and need only a single attack instance. For experimentation, we combine an existing framework for capture and replay of system calls with an appropriate minimization algorithm. We propose three minimization algorithms: Delta Debugging, Binary Debugging and Consecutive Binary Debugging. We evaluate the performance of the different algorithms and test our approach with an example program. In all test cases, our application successfully extracts the attack signature. Our current results suggest that this is a promising approach that can help us defend better and faster against unknown attacks.}, }
Endnote
%0 Thesis %A Nenova, Stefana %Y Zeller, Andreas %A referee: Wilhelm, Reinhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Extraction of Attack Signatures : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D1A0-2 %I Universität des Saarlandes %C Saarbrücken %D 2007 %V master %9 master %X With the advance of technology, the need for fast reaction to remote attacks gains in importance. A common practice to help detect malicious activity is to install an Intrusion Detection System. Intrusion detection systems are equipped with a set of signatures, i.e. descriptions of known intrusion attempts. They monitor traffic and use the signatures to detect intrusion attempts. To date, attack signatures are still mostly derived manually. However, to ensure the security of computer systems and data, the speed and quality of signature generation have to be improved. To help achieve the task, we propose an approach for automatic extraction of attack signatures. In contrast to the majority of the existing research in the area, we do not confine our approach to a particular type of attack. In particular, we are the first to try signature extraction for attacks resulting from misconfigured security policies. Whereas the majority of existing approaches rely on statistical methods and require many attack instances in order to launch the signature generation mechanism, we use experimentation and need only a single attack instance. For experimentation, we combine an existing framework for capture and replay of system calls with an appropriate minimization algorithm. We propose three minimization algorithms: Delta Debugging, Binary Debugging and Consecutive Binary Debugging. We evaluate the performance of the different algorithms and test our approach with an example program. In all test cases, our application successfully extracts the attack signature. Our current results suggest that this is a promising approach that can help us defend better and faster against unknown attacks.
[119]
G. Pandey, “Retrieval Model Enhancement by Implicit Feedback from Query Logs,” Universität des Saarlandes, Saarbrücken, 2007.
Export
BibTeX
@mastersthesis{PandeyMaster08, TITLE = {Retrieval Model Enhancement by Implicit Feedback from Query Logs}, AUTHOR = {Pandey, Gaurav}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125756E0038A185-4BA81B4BEC36919EC1257590002B50A7-PandeyMaster08}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, }
Endnote
%0 Thesis %A Pandey, Gaurav %Y Weikum, Gerhard %A referee: Bast, Hannah %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Retrieval Model Enhancement by Implicit Feedback from Query Logs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1DA7-2 %F EDOC: 428305 %F OTHER: Local-ID: C125756E0038A185-4BA81B4BEC36919EC1257590002B50A7-PandeyMaster08 %I Universität des Saarlandes %C Saarbrücken %D 2007 %P X, 51 S. %V master %9 master
[120]
S. Solomon, “Evaluation of Relevance Feedback Algorithms for XML Retrieval,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
Information retrieval and feedback in XML are rather new fields for researchers; natural questions arise, such as: how good are the feedback algorithms in XML IR? Can they be evaluated with standard evaluation tools? Even though some evaluation methods have been proposed in the literature, it is still not clear which of them are applicable in the context of XML IR, and which metrics they can be combined with to assess the quality of XML retrieval algorithms that use feedback. We propose a solution for fairly evaluating the performance of XML search engines that use feedback for improving the query results. Compared to previous approaches, we aim at removing the effect of the results for which the system has knowledge about their relevance, and at measuring the improvement on unseen relevant elements. We implemented our proposed evaluation methodologies by extending a standard evaluation tool with a module capable of assessing feedback algorithms for a specific set of metrics. We performed multiple tests on runs from both INEX 2005 and INEX 2006, covering two different XML document collections. The performance of the assessed feedback algorithms did not reach the theoretical optimal values either for the proposed evaluation methodologies, or for the used metrics. The analysis of the results shows that, although the six evaluation techniques provide good improvement figures, none of them can be declared the absolute winner. Despite the lack of a definitive conclusion, our findings provide a better understanding on the quality of feedback algorithms.
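The "remove the effect of results with known relevance" idea corresponds to the classic residual-collection methodology from relevance-feedback evaluation; a minimal sketch (function name, metric choice, and data are illustrative, not the thesis's tooling):

```python
def residual_precision_at_k(ranking, relevant, seen_in_feedback, k):
    """Residual-collection evaluation: discard results already judged during
    the feedback round, then score precision@k on the unseen remainder, so a
    feedback run gets no credit for merely re-ranking known-relevant items."""
    residual = [doc for doc in ranking if doc not in seen_in_feedback]
    return sum(doc in relevant for doc in residual[:k]) / k

# Docs 1 and 2 were judged during feedback; only unseen relevant docs 5 and 6
# can contribute to the score of the post-feedback ranking.
score = residual_precision_at_k(
    ranking=[1, 2, 3, 4, 5, 6, 7, 8],
    relevant={1, 2, 5, 6},
    seen_in_feedback={1, 2},
    k=4,
)
```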
Export
BibTeX
@mastersthesis{Solomon2007, TITLE = {Evaluation of Relevance Feedback Algorithms for {XML} Retrieval}, AUTHOR = {Solomon, Silvana}, LANGUAGE = {eng}, LOCALID = {Local-ID: C12573CC004A8E26-7696201B4CA7C699C125730800414211-Solomon2007}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {Information retrieval and feedback in {XML} are rather new fields for researchers; natural questions arise, such as: how good are the feedback algorithms in {XML IR}? Can they be evaluated with standard evaluation tools? Even though some evaluation methods have been proposed in the literature, it is still not clear which of them are applicable in the context of {XML IR}, and which metrics they can be combined with to assess the quality of {XML} retrieval algorithms that use feedback. We propose a solution for fairly evaluating the performance of the {XML} search engines that use feedback for improving the query results. Compared to previous approaches, we aim at removing the effect of the results for which the system has knowledge about their relevance, and at measuring the improvement on unseen relevant elements. We implemented our proposed evaluation methodologies by extending a standard evaluation tool with a module capable of assessing feedback algorithms for a specific set of metrics. We performed multiple tests on runs from both {INEX} 2005 and {INEX} 2006, covering two different {XML} document collections. The performance of the assessed feedback algorithms did not reach the theoretical optimal values either for the proposed evaluation methodologies, or for the used metrics. The analysis of the results shows that, although the six evaluation techniques provide good improvement figures, none of them can be declared the absolute winner. Despite the lack of a definitive conclusion, our findings provide a better understanding on the quality of feedback algorithms.}, }
Endnote
%0 Thesis %A Solomon, Silvana %Y Weikum, Gerhard %A referee: Schenkel, Ralf %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Evaluation of Relevance Feedback Algorithms for XML Retrieval : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1D94-C %F EDOC: 356461 %F OTHER: Local-ID: C12573CC004A8E26-7696201B4CA7C699C125730800414211-Solomon2007 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2007 %V master %9 master %X Information retrieval and feedback in XML are rather new fields for researchers; natural questions arise, such as: how good are the feedback algorithms in XML IR? Can they be evaluated with standard evaluation tools? Even though some evaluation methods have been proposed in the literature, it is still not clear which of them are applicable in the context of XML IR, and which metrics they can be combined with to assess the quality of XML retrieval algorithms that use feedback. We propose a solution for fairly evaluating the performance of XML search engines that use feedback for improving the query results. Compared to previous approaches, we aim at removing the effect of the results for which the system has knowledge about their relevance, and at measuring the improvement on unseen relevant elements. We implemented our proposed evaluation methodologies by extending a standard evaluation tool with a module capable of assessing feedback algorithms for a specific set of metrics. We performed multiple tests on runs from both INEX 2005 and INEX 2006, covering two different XML document collections. 
The performance of the assessed feedback algorithms did not reach the theoretical optimal values either for the proposed evaluation methodologies or for the used metrics. The analysis of the results shows that, although the six evaluation techniques provide good improvement figures, none of them can be declared the absolute winner. Despite the lack of a definitive conclusion, our findings provide a better understanding of the quality of feedback algorithms.
[121]
M. Strauss, “Realtime Generation of Multimodal Affective Sports Commentary for Embodied Agents,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
Autonomous, graphically embodied agents are a versatile platform for information presentation and user interaction. This thesis presents ERIC, a homogeneous agent framework that can be configured to provide real-time running commentary on a dynamic environment of many and frequent events. We have focused on knowledge reasoning with a world model, generating and expressing affect, and generating coherent natural language, synchronised with nonverbal modalities. The graphical and TTS output of the agent is provided by commercial systems. ERIC is currently implemented to commentate a simulated horse race and a multiplayer tank combat game. With minimal modification the system is configurable to provide commentary in any continuous dynamically changing environment; for example, it could commentate sports matches and computer games, or play the role of "tourist guide" during a self-guided tour of a city. An elaborate world model is deduced from limited input by an expert system implemented as rules in Jess. Natural language is generated using template-based NLG. Discourse coherence is maintained by requiring semantic relations between the forward-looking and backward-looking centers of successive utterances. The agent uses a set of causal and belief relations to assign appraisals of emotion-eliciting conditions to facts in the world model based on goals and desires. These appraisals are used to generate an affective state according to the OCC cognitive model of emotions; the agent's affect is expressed via his lexical choice, gestures and facial expressions. ERIC was designed to be domain-independent, homogeneous, behaviourally complex, reactive and affective. Domain-independence was evaluated by comparing the effort required to implement the ERIC system with the effort required to re-implement the framework for another domain. Complexity, reactivity and affectivity were assessed by independent experts, whose reviews are presented.
Export
BibTeX
@mastersthesis{Strauss2007, TITLE = {Realtime Generation of Multimodal Affective Sports Commentary for Embodied Agents}, AUTHOR = {Strauss, Martin}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {Autonomous, graphically embodied agents are a versatile platform for information presentation and user interaction. This thesis presents ERIC, a homogeneous agent framework that can be configured to provide real-time running commentary on a dynamic environment of many and frequent events. We have focused on knowledge reasoning with a world model, generating and expressing affect, and generating coherent natural language, synchronised with nonverbal modalities. The graphical and TTS output of the agent is provided by commercial systems. ERIC is currently implemented to commentate a simulated horse race and a multiplayer tank combat game. With minimal modification the system is configurable to provide commentary in any continuous dynamically changing environment; for example, it could commentate sports matches and computer games, or play the role of "tourist guide" during a self-guided tour of a city. An elaborate world model is deduced from limited input by an expert system implemented as rules in Jess. Natural language is generated using template-based NLG. Discourse coherence is maintained by requiring semantic relations between the forward-looking and backward-looking centers of successive utterances. The agent uses a set of causal and belief relations to assign appraisals of emotion-eliciting conditions to facts in the world model based on goals and desires. These appraisals are used to generate an affective state according to the OCC cognitive model of emotions; the agent's affect is expressed via his lexical choice, gestures and facial expressions. ERIC was designed to be domain-independent, homogeneous, behaviourally complex, reactive and affective. 
Domain-independence was evaluated by comparing the effort required to implement the ERIC system with the effort required to re-implement the framework for another domain. Complexity, reactivity and affectivity were assessed by independent experts, whose reviews are presented.}, }
Endnote
%0 Thesis %A Strauss, Martin %Y Wahlster, Wolfgang %A referee: Kipp, Michael %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Realtime Generation of Multimodal Affective Sports Commentary for Embodied Agents : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D1A5-7 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2007 %V master %9 master %X Autonomous, graphically embodied agents are a versatile platform for information presentation and user interaction. This thesis presents ERIC, a homogeneous agent framework that can be configured to provide real-time running commentary on a dynamic environment of many and frequent events. We have focused on knowledge reasoning with a world model, generating and expressing affect, and generating coherent natural language, synchronised with nonverbal modalities. The graphical and TTS output of the agent is provided by commercial systems. ERIC is currently implemented to commentate a simulated horse race and a multiplayer tank combat game. With minimal modification the system is configurable to provide commentary in any continuous dynamically changing environment; for example, it could commentate sports matches and computer games, or play the role of "tourist guide" during a self-guided tour of a city. An elaborate world model is deduced from limited input by an expert system implemented as rules in Jess. Natural language is generated using template-based NLG. Discourse coherence is maintained by requiring semantic relations between the forward-looking and backward-looking centers of successive utterances. The agent uses a set of causal and belief relations to assign appraisals of emotion-eliciting conditions to facts in the world model based on goals and desires. 
These appraisals are used to generate an affective state according to the OCC cognitive model of emotions; the agent's affect is expressed via his lexical choice, gestures and facial expressions. ERIC was designed to be domain-independent, homogeneous, behaviourally complex, reactive and affective. Domain-independence was evaluated by comparing the effort required to implement the ERIC system with the effort required to re-implement the framework for another domain. Complexity, reactivity and affectivity were assessed by independent experts, whose reviews are presented.
[122]
P. Wischnewski, “Contextual Rewriting in SPASS,” Universität des Saarlandes, Saarbrücken, 2007.
Abstract
First-order theorem proving with equality is undecidable, in general. However, it is semi-decidable in the sense that it is refutationally complete. The basis for a (semi)-decision procedure for first-order clauses with equality is a calculus composed of inference and reduction rules. The inference rules of the calculus generate new clauses whereas the reduction rules delete clauses or transform them into simpler ones. If, in particular, strong reduction rules are available, decidability of certain subclasses of first-order logic can be shown. Hence, sophisticated reductions are essential for progress in automated theorem proving. In this thesis we consider the superposition calculus and in particular the sophisticated reduction rule Contextual Rewriting. However, it is in general undecidable whether contextual rewriting can be applied. Therefore, to make the rule applicable in practice, it has to be further refined. In this work we develop an instance of contextual rewriting which effectively performs contextual rewriting and we implement this in the theorem prover Spass.
Export
BibTeX
@mastersthesis{Wischnewski2007, TITLE = {Contextual Rewriting in {SPASS}}, AUTHOR = {Wischnewski, Patrick}, LANGUAGE = {eng}, LOCALID = {Local-ID: C12573CC004A8E26-4510F08E286235C1C12573D4004FF8F5-Wischnewski2007}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2007}, DATE = {2007}, ABSTRACT = {First-order theorem proving with equality is undecidable, in general. However, it is semi-decidable in the sense that it is refutationally complete. The basis for a (semi)-decision procedure for first-order clauses with equality is a calculus composed of inference and reduction rules. The inference rules of the calculus generate new clauses whereas the reduction rules delete clauses or transform them into simpler ones. If, in particular, strong reduction rules are available, decidability of certain subclasses of first-order logic can be shown. Hence, sophisticated reductions are essential for progress in automated theorem proving. In this thesis we consider the superposition calculus and in particular the sophisticated reduction rule Contextual Rewriting. However, it is in general undecidable whether contextual rewriting can be applied. Therefore, to make the rule applicable in practice, it has to be further refined. In this work we develop an instance of contextual rewriting which effectively performs contextual rewriting and we implement this in the theorem prover Spass.}, }
Endnote
%0 Thesis %A Wischnewski, Patrick %+ Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society %T Contextual Rewriting in SPASS : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-1D89-6 %F EDOC: 356454 %F OTHER: Local-ID: C12573CC004A8E26-4510F08E286235C1C12573D4004FF8F5-Wischnewski2007 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2007 %V master %9 master %X First-order theorem proving with equality is undecidable, in general. However, it is semi-decidable in the sense that it is refutationally complete. The basis for a (semi)-decision procedure for first-order clauses with equality is a calculus composed of inference and reduction rules. The inference rules of the calculus generate new clauses whereas the reduction rules delete clauses or transform them into simpler ones. If, in particular, strong reduction rules are available, decidability of certain subclasses of first-order logic can be shown. Hence, sophisticated reductions are essential for progress in automated theorem proving. In this thesis we consider the superposition calculus and in particular the sophisticated reduction rule Contextual Rewriting. However, it is in general undecidable whether contextual rewriting can be applied. Therefore, to make the rule applicable in practice, it has to be further refined. In this work we develop an instance of contextual rewriting which effectively performs contextual rewriting and we implement this in the theorem prover Spass.
2006
[123]
L. Antova, “Efficient Representation and Processing of Incomplete Information,” Universität des Saarlandes, Saarbrücken, 2006.
Abstract
Database systems often have to deal with incomplete information as the world they model is not always complete. This is a frequent case in data integration applications, scientific databases, or in scenarios where information is manually entered and contains errors or ambiguity. In the last couple of decades different formalisms have been proposed for representing incomplete information. These include, among others, the so-called relations with or-sets, tables with variables (v-tables) and conditional tables (c-tables). However, none of the current approaches for representing incomplete information has satisfied the requirements for a powerful and efficient data management system, which is the reason why none has found application in practice. All models generally suffer from at least one of two weaknesses. Either they are not strong enough for representing results of simple queries, as is the case for v-tables and relations with or-sets, or the handling and processing of the data, e.g. for query evaluation, is intractable (as is the case for c-tables). In this thesis, we present a decomposition-based approach to addressing the problem of incompletely specified databases. We introduce world-set decompositions (WSDs), a space-efficient formalism for representing any finite set of possible worlds, and define relational algebra queries on WSDs. For each relational algebra operation we present an algorithm operating on WSDs. We also address the problem of data cleaning in the context of world-set decompositions. We present a modified version of the existing Chase algorithm, which we use to remove inconsistent worlds in an incompletely specified database. We evaluate our techniques in a large census data scenario with data originating from the 1990 US census and we show that data processing on WSDs is both scalable and efficient.
Export
BibTeX
@mastersthesis{Antova2006, TITLE = {Efficient Representation and Processing of Incomplete Information}, AUTHOR = {Antova, Lyublena}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Database systems often have to deal with incomplete information as the world they model is not always complete. This is a frequent case in data integration applications, scientific databases, or in scenarios where information is manually entered and contains errors or ambiguity. In the last couple of decades different formalisms have been proposed for representing incomplete information. These include, among others, the so-called relations with or-sets, tables with variables (v-tables) and conditional tables (c-tables). However, none of the current approaches for representing incomplete information has satisfied the requirements for a powerful and efficient data management system, which is the reason why none has found application in practice. All models generally suffer from at least one of two weaknesses. Either they are not strong enough for representing results of simple queries, as is the case for v-tables and relations with or-sets, or the handling and processing of the data, e.g. for query evaluation, is intractable (as is the case for c-tables). In this thesis, we present a decomposition-based approach to addressing the problem of incompletely specified databases. We introduce world-set decompositions (WSDs), a space-efficient formalism for representing any finite set of possible worlds, and define relational algebra queries on WSDs. For each relational algebra operation we present an algorithm operating on WSDs. We also address the problem of data cleaning in the context of world-set decompositions. We present a modified version of the existing Chase algorithm, which we use to remove inconsistent worlds in an incompletely specified database. 
We evaluate our techniques in a large census data scenario with data originating from the 1990 US census and we show that data processing on WSDs is both scalable and efficient.}, }
Endnote
%0 Thesis %A Antova, Lyublena %Y Olteanu, Dan %A referee: Koch, Christoph %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Efficient Representation and Processing of Incomplete Information : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D311-7 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2006 %V master %9 master %X Database systems often have to deal with incomplete information as the world they model is not always complete. This is a frequent case in data integration applications, scientific databases, or in scenarios where information is manually entered and contains errors or ambiguity. In the last couple of decades different formalisms have been proposed for representing incomplete information. These include, among others, the so-called relations with or-sets, tables with variables (v-tables) and conditional tables (c-tables). However, none of the current approaches for representing incomplete information has satisfied the requirements for a powerful and efficient data management system, which is the reason why none has found application in practice. All models generally suffer from at least one of two weaknesses. Either they are not strong enough for representing results of simple queries, as is the case for v-tables and relations with or-sets, or the handling and processing of the data, e.g. for query evaluation, is intractable (as is the case for c-tables). In this thesis, we present a decomposition-based approach to addressing the problem of incompletely specified databases. We introduce world-set decompositions (WSDs), a space-efficient formalism for representing any finite set of possible worlds, and define relational algebra queries on WSDs. For each relational algebra operation we present an algorithm operating on WSDs. We also address the problem of data cleaning in the context of world-set decompositions. 
We present a modified version of the existing Chase algorithm, which we use to remove inconsistent worlds in an incompletely specified database. We evaluate our techniques in a large census data scenario with data originating from the 1990 US census and we show that data processing on WSDs is both scalable and efficient.
[124]
Y. Assenov, “Topological Analysis of Biological Networks,” Universität des Saarlandes, Saarbrücken, 2006.
Export
BibTeX
@mastersthesis{Assenov2006a, TITLE = {Topological Analysis of Biological Networks}, AUTHOR = {Assenov, Yassen}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125673F004B2D7B-7C34F302699CE873C12572960035F334-Assenov2006a}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, }
Endnote
%0 Thesis %A Assenov, Yassen %Y Albrecht, Mario %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Topological Analysis of Biological Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2441-9 %F EDOC: 314467 %F OTHER: Local-ID: C125673F004B2D7B-7C34F302699CE873C12572960035F334-Assenov2006a %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2006 %V master %9 master
[125]
M. Demir, “Predicting Component Failures at Early Design Time,” Universität des Saarlandes, Saarbrücken, 2006.
Abstract
For the effective prevention and elimination of defects and failures in a software system, it is important to know which parts of the software are more likely to contain errors, and therefore, can be considered as "risky". To increase reliability and quality, more effort should be spent in risky components during design, implementation, and testing. Examining the version archive and the code of a large open-source project, we have investigated the relation between the risk of components as measured by post-release failures, and different code structures, such as method calls, variables, exception handling expressions and inheritance statements. We have analyzed the different types of usage relations between components, and their effects on the failures. We utilized three commonly used statistical techniques to build failure prediction models. As a realistic opponent to our models, we introduced a "simple prediction model" which makes use of the riskiness information from the available components, rather than making random guesses. While the results from the classification experiments supported the use of code structures to predict failure-proneness, our regression analyses showed that the design time decisions also affected component riskiness. Our models were able to make precise predictions even with only the knowledge of the inheritance relations. Since inheritance relations are defined earliest at design time, based on the results of this study we can say that it may be possible to initiate preventive actions against failures even early in the design phase of a project.
Export
BibTeX
@mastersthesis{Demir2006, TITLE = {Predicting Component Failures at Early Design Time}, AUTHOR = {Demir, Melih}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {For the effective prevention and elimination of defects and failures in a software system, it is important to know which parts of the software are more likely to contain errors, and therefore, can be considered as "risky". To increase reliability and quality, more effort should be spent in risky components during design, implementation, and testing. Examining the version archive and the code of a large open-source project, we have investigated the relation between the risk of components as measured by post-release failures, and different code structures, such as method calls, variables, exception handling expressions and inheritance statements. We have analyzed the different types of usage relations between components, and their effects on the failures. We utilized three commonly used statistical techniques to build failure prediction models. As a realistic opponent to our models, we introduced a "simple prediction model" which makes use of the riskiness information from the available components, rather than making random guesses. While the results from the classification experiments supported the use of code structures to predict failure-proneness, our regression analyses showed that the design time decisions also affected component riskiness. Our models were able to make precise predictions even with only the knowledge of the inheritance relations. Since inheritance relations are defined earliest at design time, based on the results of this study we can say that it may be possible to initiate preventive actions against failures even early in the design phase of a project.}, }
Endnote
%0 Thesis %A Demir, Melih %Y Zeller, Andreas %A referee: Wilhelm, Reinhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Predicting Component Failures at Early Design Time : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D44A-1 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2006 %V master %9 master %X For the effective prevention and elimination of defects and failures in a software system, it is important to know which parts of the software are more likely to contain errors, and therefore, can be considered as "risky". To increase reliability and quality, more effort should be spent in risky components during design, implementation, and testing. Examining the version archive and the code of a large open-source project, we have investigated the relation between the risk of components as measured by post-release failures, and different code structures, such as method calls, variables, exception handling expressions and inheritance statements. We have analyzed the different types of usage relations between components, and their effects on the failures. We utilized three commonly used statistical techniques to build failure prediction models. As a realistic opponent to our models, we introduced a "simple prediction model" which makes use of the riskiness information from the available components, rather than making random guesses. While the results from the classification experiments supported the use of code structures to predict failure-proneness, our regression analyses showed that the design time decisions also affected component riskiness. Our models were able to make precise predictions even with only the knowledge of the inheritance relations. 
Since inheritance relations are defined earliest at design time, based on the results of this study we can say that it may be possible to initiate preventive actions against failures even early in the design phase of a project.
[126]
R. Dimitrova, “Model Checking with Abstraction Refinement for Well-structured Systems,” Universität des Saarlandes, Saarbrücken, 2006.
Export
BibTeX
@mastersthesis{DimitrovaPhd2006, TITLE = {Model Checking with Abstraction Refinement for Well-structured Systems}, AUTHOR = {Dimitrova, Rayna}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-13339}, LOCALID = {Local-ID: C1256104005ECAFC-E98933E3DE9BBA00C12572350035E8C3-Dimitrova2006}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, }
Endnote
%0 Thesis %A Dimitrova, Rayna %Y Podelski, Andreas %A referee: Finkbeiner, Bernd %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society External Organizations %T Model Checking with Abstraction Refinement for Well-structured Systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-218F-8 %F EDOC: 314388 %F OTHER: Local-ID: C1256104005ECAFC-E98933E3DE9BBA00C12572350035E8C3-Dimitrova2006 %U urn:nbn:de:bsz:291-scidok-13339 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2006 %P 93 p. %V master %9 master %U http://scidok.sulb.uni-saarland.de/volltexte/2007/1333/ %U http://scidok.sulb.uni-saarland.de/doku/urheberrecht.php?la=de
[127]
K. Halachev, “EpiGRAPHregression: A toolkit for (epi-)genomic correlation analysis and prediction of quantitative attributes,” Universität des Saarlandes, Saarbrücken, 2006.
Abstract
Five years ago, the human genome sequence was published, an important milestone towards understanding human biology. However, basic cell processes cannot be explained by the genome sequence alone. Instead, further layers of control such as the epigenome will be important for significant advances towards better understanding of normal and disease-related phenotypes. A new research field in computational biology is currently emerging that is concerned with the analysis of functional information beyond the human genome sequence. Our goal is to provide biologists with means to navigate the large amounts of epigenetic data and with tools to screen these data for biologically interesting associations. We developed a statistical learning methodology that facilitates mapping of epigenetic data against the human genome, identifies areas of over- and underrepresentation, and finds significant correlations with DNA-related attributes. We implemented this methodology in a software toolkit called EpiGRAPHregression. EpiGRAPHregression is a prototype of a genome analysis tool that enables the user to analyze relationships between many attributes, and it provides a quick test whether a newly analyzed attribute can be efficiently predicted from already known attributes. Thereby, EpiGRAPHregression may significantly speed up the analysis of new types of genomic and epigenomic data.
Export
BibTeX
@mastersthesis{HalachevMaster2006, TITLE = {{EpiGRAPHregression}: A toolkit for (epi-)genomic correlation analysis and prediction of quantitative attributes}, AUTHOR = {Halachev, Konstantin}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125673F004B2D7B-56DA5A90E26AE559C12572350051361B-HalachevMaster2006}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Five years ago, the human genome sequence was published, an important milestone towards understanding human biology. However, basic cell processes cannot be explained by the genome sequence alone. Instead, further layers of control such as the epigenome will be important for significant advances towards better understanding of normal and disease-related phenotypes. A new research field in computational biology is currently emerging that is concerned with the analysis of functional information beyond the human genome sequence. Our goal is to provide biologists with means to navigate the large amounts of epigenetic data and with tools to screen these data for biologically interesting associations. We developed a statistical learning methodology that facilitates mapping of epigenetic data against the human genome, identifies areas of over- and underrepresentation, and finds significant correlations with DNA-related attributes. We implemented this methodology in a software toolkit called EpiGRAPHregression. EpiGRAPHregression is a prototype of a genome analysis tool that enables the user to analyze relationships between many attributes, and it provides a quick test whether a newly analyzed attribute can be efficiently predicted from already known attributes. Thereby, EpiGRAPHregression may significantly speed up the analysis of new types of genomic and epigenomic data.}, }
Endnote
%0 Thesis %A Halachev, Konstantin %Y Lengauer, Thomas %Y Bock, Christoph %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T EpiGRAPHregression: A toolkit for (epi-)genomic correlation analysis and prediction of quantitative attributes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-22BA-B %F EDOC: 314491 %F OTHER: Local-ID: C125673F004B2D7B-56DA5A90E26AE559C12572350051361B-HalachevMaster2006 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2006 %V master %9 master %X Five years ago, the human genome sequence was published, an important milestone towards understanding human biology. However, basic cell processes cannot be explained by the genome sequence alone. Instead, further layers of control such as the epigenome will be important for significant advances towards better understanding of normal and disease-related phenotypes. A new research field in computational biology is currently emerging that is concerned with the analysis of functional information beyond the human genome sequence. Our goal is to provide biologists with means to navigate the large amounts of epigenetic data and with tools to screen these data for biologically interesting associations. We developed a statistical learning methodology that facilitates mapping of epigenetic data against the human genome, identifies areas of over- and underrepresentation, and finds significant correlations with DNA-related attributes. We implemented this methodology in a software toolkit called EpiGRAPHregression. 
EpiGRAPHregression is a prototype of a genome analysis tool that enables the user to analyze relationships between many attributes, and it provides a quick test whether a newly analyzed attribute can be efficiently predicted from already known attributes. Thereby, EpiGRAPHregression may significantly speed up the analysis of new types of genomic and epigenomic data.
[128]
V. Osipov, “A Polynomial Time Randomized Parallel Approximation Algorithm for Finding Heavy Planar Subgraphs,” Universität des Saarlandes, Saarbrücken, 2006.
Abstract
We provide an approximation algorithm for the Maximum Weight Planar Subgraph problem, the NP-hard problem of finding a heaviest planar subgraph in an edge-weighted graph G. In the general case our algorithm has performance ratio at least 1/3 + 1/72, matching the best algorithm known so far, though in several special cases we prove stronger results. In particular, we obtain performance ratio 2/3 (instead of 7/12) for the NP-hard Maximum Weight Outerplanar Subgraph problem, meeting the performance ratio of the best algorithm for the unweighted case. When the maximum weight planar subgraph is one of several special types of Hamiltonian graphs, we show performance ratios at least 2/5 and 4/9 (instead of 1/3 + 1/72), and 1/2 (instead of 4/9) for the unweighted case.
Export
BibTeX
@mastersthesis{Osipov2006, TITLE = {A Polynomial Time Randomized Parallel Approximation Algorithm for Finding Heavy Planar Subgraphs}, AUTHOR = {Osipov, Vitali}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We provide an approximation algorithm for the Maximum Weight Planar Subgraph problem, the NP-hard problem of finding a heaviest planar subgraph in an edge-weighted graph G. In the general case our algorithm has performance ratio at least 1/3+1/72, matching the best algorithm known so far, though in several special cases we prove stronger results. In particular, we obtain performance ratio 2/3 (instead of 7/12) for the NP-hard Maximum Weight Outerplanar Subgraph problem, meeting the performance ratio of the best algorithm for the unweighted case. When the maximum weight planar subgraph is one of several special types of Hamiltonian graphs, we show performance ratios at least 2/5 and 4/9 (instead of 1/3 + 1/72), and 1/2 (instead of 4/9) for the unweighted case.}, }
Endnote
%0 Thesis %A Osipov, Vitali %Y Bl&#228;ser, Markus %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations %T A Polynomial Time Randomized Parallel Approximation Algorithm for Finding Heavy Planar Subgraphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D5A8-7 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2006 %V master %9 master %X We provide an approximation algorithm for the Maximum Weight Planar Subgraph problem, the NP-hard problem of finding a heaviest planar subgraph in an edge-weighted graph G. In the general case our algorithm has performance ratio at least 1/3+1/72, matching the best algorithm known so far, though in several special cases we prove stronger results. In particular, we obtain performance ratio 2/3 (instead of 7/12) for the NP-hard Maximum Weight Outerplanar Subgraph problem, meeting the performance ratio of the best algorithm for the unweighted case. When the maximum weight planar subgraph is one of several special types of Hamiltonian graphs, we show performance ratios at least 2/5 and 4/9 (instead of 1/3 + 1/72), and 1/2 (instead of 4/9) for the unweighted case.
[129]
L. Tolosi, “Analysis of Array CGH Data for the Estimation of Genetic Tumor Progression,” Universität des Saarlandes, Saarbrücken, 2006.
Abstract
In cancer research, prediction of time to death or relapse is important for a meaningful tumor classification and selecting appropriate therapies. The accumulation of genetic alterations during tumor progression can be used for the assessment of the genetic status of the tumor. ArrayCGH technology is used to measure genomic amplifications and deletions, with a high resolution that allows the detection of copy number changes down to single genes. We propose an automated method for the analysis of cancer mutation accumulation based on statistical analysis of arrayCGH data. The method consists of four steps: arrayCGH smoothing, aberration detection, consensus analysis, and oncogenetic tree model estimation. For the second and third steps, we propose new algorithmic solutions. First, we use the adaptive weights smoothing-based algorithm GLAD for identifying regions of constant copy number. Then, in order to select regions of gain and loss, we fit robust normals to the smoothed log2 ratios of each CGH array and choose appropriate significance cutoffs. The consensus analysis step consists of an automated selection of recurrent aberrant regions when multiple CGH experiments on the same tumor type are available. We propose to associate p-values with each measured genomic position and to select the regions where the p-value is sufficiently small. The aberrant regions computed by our method can be further used to estimate evolutionary trees, which model the dependencies between genetic mutations and can help to predict tumor progression stages and survival times. We applied our method to two arrayCGH data sets obtained from prostate cancer and glioblastoma patients, respectively. The results confirm previous knowledge on the genetic mutations specific to these types of cancer, but also bring out new regions, often reducing to single genes, due to the high resolution of arrayCGH measurements.
An oncogenetic tree mixture model fitted to the prostate cancer data set shows two distinct evolutionary patterns discriminating between two different cell lines. Moreover, when used as clustering features, the genetic mutations our algorithm outputs separate well the arrays representing four different cell lines, proving that we extract meaningful information.
Export
BibTeX
@mastersthesis{Tolosi2006, TITLE = {Analysis of Array {CGH} Data for the Estimation of Genetic Tumor Progression}, AUTHOR = {Tolosi, Laura}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125673F004B2D7B-0EA512C11180051FC12572350050DF02-Tolosi2006}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {In cancer research, prediction of time to death or relapse is important for a meaningful tumor classification and selecting appropriate therapies. The accumulation of genetic alterations during tumor progression can be used for the assessment of the genetic status of the tumor. ArrayCGH technology is used to measure genomic amplifications and deletions, with a high resolution that allows the detection of copy number changes down to single genes. We propose an automated method for the analysis of cancer mutation accumulation based on statistical analysis of arrayCGH data. The method consists of four steps: arrayCGH smoothing, aberration detection, consensus analysis and oncogenetic tree model estimation. For the second and third steps, we propose new algorithmic solutions. First, we use the adaptive weights smoothing-based algorithm GLAD for identifying regions of constant copy number. Then, in order to select regions of gain and loss, we fit robust normals to the smoothed Log$_2$Ratios of each CGH array and choose appropriate significance cutoffs. The consensus analysis step consists of an automated selection of recurrent aberrant regions when multiple CGH experiments on the same tumor type are available. We propose to associate $p$-values with each measured genomic position and to select the regions where the $p$-value is sufficiently small. The aberrant regions computed by our method can be further used to estimate evolutionary trees, which model the dependencies between genetic mutations and can help to predict tumor progression stages and survival times.
We applied our method to two arrayCGH data sets obtained from prostate cancer and glioblastoma patients, respectively. The results confirm previous knowledge on the genetic mutations specific to these types of cancer, but also bring out new regions, often reducing to single genes, due to the high resolution of arrayCGH measurements. An oncogenetic tree mixture model fitted to the prostate cancer data set shows two distinct evolutionary patterns discriminating between two different cell lines. Moreover, when used as clustering features, the genetic mutations our algorithm outputs separate well the arrays representing four different cell lines, proving that we extract meaningful information.}, }
Endnote
%0 Thesis %A Tolosi, Laura %Y Rahnenf&#252;hrer, J&#246;rg %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Analysis of Array CGH Data for the Estimation of Genetic Tumor Progression : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-21A7-1 %F EDOC: 314521 %F OTHER: Local-ID: C125673F004B2D7B-0EA512C11180051FC12572350050DF02-Tolosi2006 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2006 %V master %9 master %X In cancer research, prediction of time to death or relapse is important for a meaningful tumor classification and selecting appropriate therapies. The accumulation of genetic alterations during tumor progression can be used for the assessment of the genetic status of the tumor. ArrayCGH technology is used to measure genomic amplifications and deletions, with a high resolution that allows the detection of copy number changes down to single genes. We propose an automated method for the analysis of cancer mutation accumulation based on statistical analysis of arrayCGH data. The method consists of four steps: arrayCGH smoothing, aberration detection, consensus analysis and oncogenetic tree model estimation. For the second and third steps, we propose new algorithmic solutions. First, we use the adaptive weights smoothing-based algorithm GLAD for identifying regions of constant copy number. Then, in order to select regions of gain and loss, we fit robust normals to the smoothed Log$_2$Ratios of each CGH array and choose appropriate significance cutoffs. The consensus analysis step consists of an automated selection of recurrent aberrant regions when multiple CGH experiments on the same tumor type are available.
We propose to associate $p$-values with each measured genomic position and to select the regions where the $p$-value is sufficiently small. The aberrant regions computed by our method can be further used to estimate evolutionary trees, which model the dependencies between genetic mutations and can help to predict tumor progression stages and survival times. We applied our method to two arrayCGH data sets obtained from prostate cancer and glioblastoma patients, respectively. The results confirm previous knowledge on the genetic mutations specific to these types of cancer, but also bring out new regions, often reducing to single genes, due to the high resolution of arrayCGH measurements. An oncogenetic tree mixture model fitted to the prostate cancer data set shows two distinct evolutionary patterns discriminating between two different cell lines. Moreover, when used as clustering features, the genetic mutations our algorithm outputs separate well the arrays representing four different cell lines, proving that we extract meaningful information.
2005
[130]
D. Ajwani, “Design, Implementation and Experimental Study of External Memory BFS Algorithms,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Ajwani05, TITLE = {Design, Implementation and Experimental Study of External Memory {BFS} Algorithms}, AUTHOR = {Ajwani, Deepak}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256428004B93B8-5BA0DAF2ECA46112C1256FBE00534BFC-Ajwani05}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Ajwani, Deepak %Y Meyer, Ulrich %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Design, Implementation and Experimental Study of External Memory BFS Algorithms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2564-4 %F EDOC: 279155 %F OTHER: Local-ID: C1256428004B93B8-5BA0DAF2ECA46112C1256FBE00534BFC-Ajwani05 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
[131]
A. Alexa, “Integrating the GO Graph Structure in Scoring the Significance of Gene Ontology Terms,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Alexa2005a, TITLE = {Integrating the {GO} Graph Structure in Scoring the Significance of Gene Ontology Terms}, AUTHOR = {Alexa, Adrian}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Alexa, Adrian %Y Rahnenf&#252;hrer, J&#246;rg %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Integrating the GO Graph Structure in Scoring the Significance of Gene Ontology Terms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-0922-1 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
[132]
S. Chernov, “Result Merging in a Peer-to-Peer Web Search Engine,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Chernov2005, TITLE = {Result Merging in a Peer-to-Peer Web Search Engine}, AUTHOR = {Chernov, Sergey}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-2F6545166DCC932CC1256FBF003A2575-Chernov2005}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Chernov, Sergey %Y Weikum, Gerhard %A referee: Zimmer, Christian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations %T Result Merging in a Peer-to-Peer Web Search Engine : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-278E-5 %F EDOC: 278933 %F OTHER: Local-ID: C1256DBF005F876D-2F6545166DCC932CC1256FBF003A2575-Chernov2005 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
[133]
S. Cotton, “Satisfiability Checking with Difference Constraints,” Universität des Saarlandes, Saarbrücken, 2005.
Abstract
This thesis studies the problem of determining the satisfiability of a Boolean combination of binary difference constraints of the form x-y <= c, where x and y are numeric variables and c is a constant. In particular, we present an incremental and model-based interpreter for the theory of difference constraints in the context of a generic Boolean satisfiability checking procedure capable of incorporating interpreters for arbitrary theories. We show how to use the model-based approach to efficiently make inferences, with the option of complete inference.
Export
BibTeX
@mastersthesis{ScottCotton2005, TITLE = {Satisfiability Checking with Difference Constraints}, AUTHOR = {Cotton, Scott}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {This thesis studies the problem of determining the satisfiability of a Boolean combination of binary difference constraints of the form x-y <= c where x and y are numeric variables and c is a constant. In particular, we present an incremental and model-based interpreter for the theory of difference constraints in the context of a generic Boolean satisfiability checking procedure capable of incorporating interpreters for arbitrary theories. We show how to use the model based approach to efficiently make inferences with the option of complete inference.}, }
Endnote
%0 Thesis %A Cotton, Scott %Y Podelski, Andreas %A referee: Finkbeiner, Bernd %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society External Organizations %T Satisfiability Checking with Difference Constraints : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D5C9-C %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master %X This thesis studies the problem of determining the satisfiability of a Boolean combination of binary difference constraints of the form x-y <= c where x and y are numeric variables and c is a constant. In particular, we present an incremental and model-based interpreter for the theory of difference constraints in the context of a generic Boolean satisfiability checking procedure capable of incorporating interpreters for arbitrary theories. We show how to use the model based approach to efficiently make inferences with the option of complete inference.
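The theory underlying the abstract above has a classical graph formulation worth recalling: a conjunction of difference constraints x - y <= c is satisfiable exactly when the constraint graph, with one edge y -> x of weight c per constraint, contains no negative-weight cycle. A minimal sketch of that conjunction check, not code from the thesis (the function name and input representation are assumptions):

```python
def diff_constraints_satisfiable(constraints):
    """constraints: list of (x, y, c) triples meaning x - y <= c.

    Satisfiable iff the constraint graph -- one edge y -> x of weight c
    per constraint -- has no negative cycle, detected via Bellman-Ford.
    """
    nodes = {v for x, y, _ in constraints for v in (x, y)}
    edges = [(y, x, c) for x, y, c in constraints]
    dist = {v: 0 for v in nodes}        # implicit 0-weight virtual source
    for _ in range(max(len(nodes) - 1, 0)):
        for u, v, w in edges:           # relax every edge |V|-1 times
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # any edge that can still be relaxed witnesses a negative cycle
    return all(dist[u] + w >= dist[v] for u, v, w in edges)
```

The thesis goes further than this sketch: it decides arbitrary Boolean combinations of such constraints inside a generic SAT procedure and makes the theory check incremental, whereas the code above only handles one fixed conjunction.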
[134]
S. S. Hussain, “Encoding a Hierarchical Proof Data Structure for Contextual Reasoning in a Logical Framework,” Universität des Saarlandes, Saarbrücken, 2005.
Abstract
For many applications, such as mathematical assistant systems, effective communication between the system and its users is critical for the acceptance of the system. Explaining computer-supported proofs in natural language can enhance the users' understanding. We define a function that encodes the proofs generated by the computer-supported theorem proving system ΩMEGA into TWEGA, which is the input language of the proof presentation system P.rex. This encoding enables the natural language explanation of ΩMEGA proofs in P.rex.
Export
BibTeX
@mastersthesis{HussainS2005, TITLE = {Encoding a Hierarchical Proof Data Structure for Contextual Reasoning in a Logical Framework}, AUTHOR = {Hussain, Syed Sajjad}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {For many applications, such as mathematical assistant systems, effective communication between the system and its users is critical for the acceptance of the system. Explaining computer-supported proofs in natural language can enhance the users' understanding. We define a function that encodes the proofs generated by the computer-supported theorem proving system $\Omega$MEGA into TWEGA, which is the input language of the proof presentation system P.rex. This encoding enables the natural language explanation of $\Omega$MEGA proofs in P.rex.}, }
Endnote
%0 Thesis %A Hussain, Syed Sajjad %Y Siekmann, J&#246;rg %A referee: Benzm&#252;ller, Christoph %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Encoding a Hierarchical Proof Data Structure for Contextual Reasoning in a Logical Framework : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D5CB-8 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master %X For many applications, such as mathematical assistant systems, effective communication between the system and its users is critical for the acceptance of the system. Explaining computer-supported proofs in natural language can enhance the users' understanding. We define a function that encodes the proofs generated by the computer-supported theorem proving system &#937;MEGA into TWEGA, which is the input language of the proof presentation system P.rex. This encoding enables the natural language explanation of &#937;MEGA proofs in P.rex.
[135]
G. Ifrim, “A Bayesian Learning Approach to Concept-Based Document Classification,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Ifrim2005, TITLE = {A Bayesian Learning Approach to Concept-Based Document Classification}, AUTHOR = {Ifrim, Georgiana}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-3A9028DC2228AB1BC1256FBF003B1738-Ifrim2005}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Ifrim, Georgiana %Y Weikum, Gerhard %A referee: Theobald, Martin %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T A Bayesian Learning Approach to Concept-Based Document Classification : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2581-2 %F EDOC: 278931 %F OTHER: Local-ID: C1256DBF005F876D-3A9028DC2228AB1BC1256FBF003B1738-Ifrim2005 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
[136]
N. Kozlova, “Automatic Ontology Extraction for Document Classification,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Kozlova2005, TITLE = {Automatic Ontology Extraction for Document Classification}, AUTHOR = {Kozlova, Natalia}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-B009BA9471A70191C1256FBF003AF86C-Kozlova2005}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Kozlova, Natalia %Y Weikum, Gerhard %A referee: Theobald, Martin %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Automatic Ontology Extraction for Document Classification : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-25E4-2 %F EDOC: 278927 %F OTHER: Local-ID: C1256DBF005F876D-B009BA9471A70191C1256FBF003AF86C-Kozlova2005 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
[137]
A. Moleda, “Probabilistic Scheduling for Top-k Query Processing,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Moleda2005, TITLE = {Probabilistic Scheduling for Top-k Query Processing}, AUTHOR = {Moleda, Anna}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-C358CAA736C9613FC12570C1002A6CB8-Moleda2005}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Moleda, Anna %Y Weikum, Gerhard %A referee: Bast, Hannah %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Probabilistic Scheduling for Top-k Query Processing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2503-B %F EDOC: 278879 %F OTHER: Local-ID: C1256DBF005F876D-C358CAA736C9613FC12570C1002A6CB8-Moleda2005 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
[138]
O. Papapetrou, “On the Usage of Global Document Occurrences in Peer-to-Peer Information Systems,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Papapetrou2005, TITLE = {On the Usage of Global Document Occurrences in Peer-to-Peer Information Systems}, AUTHOR = {Papapetrou, Odysseas}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-84E02DEB41937C61C12570A6003DA112-Papapetrou2005}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Papapetrou, Odysseas %Y Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T On the Usage of Global Document Occurrences in Peer-to-Peer Information Systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2739-5 %F EDOC: 278876 %F OTHER: Local-ID: C1256DBF005F876D-84E02DEB41937C61C12570A6003DA112-Papapetrou2005 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
[139]
R. Piskac, “Formal Correctness of Result Checking for Priority Queues,” Universität des Saarlandes, Saarbrücken, 2005.
Abstract
We formally prove the correctness of the time super-efficient result checker for priority queues, which is implemented in LEDA [15]. A priority queue is a data structure that supports insertion, deletion and retrieval of the minimal element, relative to some order. A result checker for priority queues is a data structure that monitors the input and output of the priority queue. Whenever the user requests a minimal element, it checks that the returned element is indeed minimal. In order to do this, the checker makes use of a system of lower bounds. We have verified that, for every execution sequence in which the checker accepts the outputs, the priority queue returned the correct minimal elements. For the formal verification, we used the first-order theorem prover Saturate [25].
Export
BibTeX
@mastersthesis{Piskac2005, TITLE = {Formal Correctness of Result Checking for Priority Queues}, AUTHOR = {Piskac, Ruzica}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {We formally prove the correctness of the time super-efficient result checker for priority queues, which is implemented in LEDA [15]. A priority queue is a data structure that supports insertion, deletion and retrieval of the minimal element, relative to some order. A result checker for priority queues is a data structure that monitors the input and output of the priority queue. Whenever the user requests a minimal element, it checks that the returned element is indeed minimal. In order to do this, the checker makes use of a system of lower bounds. We have verified that, for every execution sequence in which the checker accepts the outputs, the priority queue returned the correct minimal elements. For the formal verification, we used the first-order theorem prover Saturate [25].}, }
Endnote
%0 Thesis %A Piskac, Ruzica %Y Ganzinger, Harald %A referee: Podelski, Andreas %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society %T Formal Correctness of Result Checking for Priority Queues : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D74E-3 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master %X We formally prove the correctness of the time super-efficient result checker for priority queues, which is implemented in LEDA [15]. A priority queue is a data structure that supports insertion, deletion and retrieval of the minimal element, relative to some order. A result checker for priority queues is a data structure that monitors the input and output of the priority queue. Whenever the user requests a minimal element, it checks that the returned element is indeed minimal. In order to do this, the checker makes use of a system of lower bounds. We have verified that, for every execution sequence in which the checker accepts the outputs, the priority queue returned the correct minimal elements. For the formal verification, we used the first-order theorem prover Saturate [25].
[140]
E. Pyrga, “Shortest Paths in Time-Dependent Networks and their Applications,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Evangelia2005, TITLE = {Shortest Paths in Time-Dependent Networks and their Applications}, AUTHOR = {Pyrga, Evangelia}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Pyrga, Evangelia %Y Mehlhorn, Kurt %A referee: Zaroliagis, Christos %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Shortest Paths in Time-Dependent Networks and their Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D754-4 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
[141]
S. Ray, “Counting Straight-Edge Triangulations of Planar Point Sets,” Universität des Saarlandes, Saarbrücken, 2005.
Abstract
A triangulation of a finite set S of points in R^2 is a maximal set of line segments with disjoint interiors whose end points are in S. A set of points in the plane can have many triangulations, and it is known that a set of n points always has more than 2.33^n [7] and fewer than 59^(n - Ω(log n)) [4] triangulations. However, these bounds are not tight. Also, efficiently counting the triangulations of a given set of points remains an open problem. The fastest method so far is based on the so-called t-path method [5], and it was the first algorithm with a running time sublinear in the number of triangulations counted. In this thesis, we consider a slightly different approach to counting the number of triangulations. Although we are unable to prove any non-trivial result about our algorithm yet, empirical results show that the running time of our algorithm for a set of n points is O(n log^2 n · T(n)), where T(n) is the number of triangulations counted, and in practice it performs much better than the earlier algorithm.
Export
BibTeX
@mastersthesis{Saurabh, TITLE = {Counting Straight-Edge Triangulations of Planar Point Sets}, AUTHOR = {Ray, Saurabh}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {A triangulation of a finite set S of points in $R^2$ is a maximal set of line segments with disjoint interiors whose end points are in S. A set of points in the plane can have many triangulations, and it is known that a set of n points always has more than $2.33^n$ [7] and fewer than $59^{n-\Omega(\log n)}$ [4] triangulations. However, these bounds are not tight. Also, efficiently counting the triangulations of a given set of points remains an open problem. The fastest method so far is based on the so-called t-path method [5], and it was the first algorithm with a running time sublinear in the number of triangulations counted. In this thesis, we consider a slightly different approach to counting the number of triangulations. Although we are unable to prove any non-trivial result about our algorithm yet, empirical results show that the running time of our algorithm for a set of n points is $O(n \log^2 n \cdot T(n))$, where $T(n)$ is the number of triangulations counted, and in practice it performs much better than the earlier algorithm.}, }
Endnote
%0 Thesis %A Ray, Saurabh %Y Seidel, Raimund %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations %T Counting Straight-Edge Triangulations of Planar Point Sets : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D757-D %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master %X A triangulation of a finite set S of points in R^2 is a maximal set of line segments with disjoint interiors whose end points are in S. A set of points in the plane can have many triangulations, and it is known that a set of n points always has more than 2.33^n [7] and fewer than 59^(n - &#937;(log n)) [4] triangulations. However, these bounds are not tight. Also, efficiently counting the triangulations of a given set of points remains an open problem. The fastest method so far is based on the so-called t-path method [5], and it was the first algorithm with a running time sublinear in the number of triangulations counted. In this thesis, we consider a slightly different approach to counting the number of triangulations. Although we are unable to prove any non-trivial result about our algorithm yet, empirical results show that the running time of our algorithm for a set of n points is O(n log^2 n &#183; T(n)), where T(n) is the number of triangulations counted, and in practice it performs much better than the earlier algorithm.
[142]
A. Schlicker, “A Global Approach to Comparative Genomics: Comparison of Functional Annotation over the Taxonomic Tree,” Universität des Saarlandes, Saarbrücken, 2005.
Abstract
Genome sequencing projects produce large amounts of data that are stored in sequence databases. Entries in these databases are annotated using the results of different experiments and computational methods. These methods usually rely on homology detection based on sequence similarity searches. Gene Ontology (GO) provides a standard vocabulary of functional terms, and allows a coherent annotation of gene products. These annotations can be used as a basis for new methods that compare gene products on the basis of their molecular function and biological role. In this thesis, we present a new approach for integrating the species taxonomy, protein family classifications and GO annotations. We implemented a database and a client application, GOTaxExplorer, that can be used to perform queries with a simplified language and to process and visualize the results. It allows the comparison of different taxonomic groups with regard to the protein families or the protein functions associated with the different genomes. We developed a method for comparing GO annotations which includes a measure of functional similarity between gene products. The method was able to find functional relationships even if the proteins show no significant sequence similarity. We provide results for different application scenarios, in particular for the identification of new drug targets.
Export
BibTeX
@mastersthesis{Schlicker2005, TITLE = {A Global Approach to Comparative Genomics: Comparison of Functional Annotation over the Taxonomic Tree}, AUTHOR = {Schlicker, Andreas}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125673F004B2D7B-76B88A36B2C63828C12570EB004472E7-Schlicker2005}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {Genome sequencing projects produce large amounts of data that are stored in sequence databases. Entries in these databases are annotated using the results of different experiments and computational methods. These methods usually rely on homology detection based on sequence similarity searches. Gene Ontology (GO) provides a standard vocabulary of functional terms, and allows a coherent annotation of gene products. These annotations can be used as a basis for new methods that compare gene products on the basis of their molecular function and biological role. In this thesis, we present a new approach for integrating the species taxonomy, protein family classifications and GO annotations. We implemented a database and a client application, GOTaxExplorer, that can be used to perform queries with a simplified language and to process and visualize the results. It allows the comparison of different taxonomic groups with regard to the protein families or the protein functions associated with the different genomes. We developed a method for comparing GO annotations which includes a measure of functional similarity between gene products. The method was able to find functional relationships even if the proteins show no significant sequence similarity. We provide results for different application scenarios, in particular for the identification of new drug targets.}, }
Endnote
%0 Thesis %A Schlicker, Andreas %Y Domingues, Francisco S. %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T A Global Approach to Comparative Genomics: Comparison of Functional Annotation over the Taxonomic Tree : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-258D-A %F EDOC: 279036 %F OTHER: Local-ID: C125673F004B2D7B-76B88A36B2C63828C12570EB004472E7-Schlicker2005 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master %X Genome sequencing projects produce large amounts of data that are stored in sequence databases. Entries in these databases are annotated using the results of different experiments and computational methods. These methods usually rely on homology detection based on sequence similarity searches. Gene Ontology (GO) provides a standard vocabulary of functional terms, and allows a coherent annotation of gene products. These annotations can be used as a basis for new methods that compare gene products on the basis of their molecular function and biological role. In this thesis, we present a new approach for integrating the species taxonomy, protein family classifications and GO annotations. We implemented a database and a client application, GOTaxExplorer, that can be used to perform queries with a simplified language and to process and visualize the results. It allows the comparison of different taxonomic groups with regard to the protein families or the protein functions associated with the different genomes. We developed a method for comparing GO annotations which includes a measure of functional similarity between gene products. 
The method was able to find functional relationships even if the proteins show no significant sequence similarity. We provide results for different application scenarios, in particular for the identification of new drug targets.
[143]
P. Serdyukov, “Query Routing in Peer-to-Peer Web Search,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Serdyukov2005, TITLE = {Query Routing in Peer-to-Peer Web Search}, AUTHOR = {Serdyukov, Pavel}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256DBF005F876D-C84939A41A9A1991C1256FBF003ABFD0-Serdyukov2005}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Serdyukov, Pavel %Y Weikum, Gerhard %A referee: Michel, Sebastian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Query Routing in Peer-to-Peer Web Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-277B-F %F EDOC: 278873 %F OTHER: Local-ID: C1256DBF005F876D-C84939A41A9A1991C1256FBF003ABFD0-Serdyukov2005 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
[144]
W.-I. Siu, “Computational Prediction of MHC-Peptide Interaction,” Universität des Saarlandes, Saarbrücken, 2005.
Abstract
T-cell recognition is a critical step in regulating the immune response. Activation of cytotoxic T-cells requires that MHC class I molecules form complexes with specific peptides and present them on the surface of the cell. Identification of potential ligands to MHC is therefore important for understanding disease pathogenesis and aiding vaccine design. Despite years of effort in the field, reliable prediction of MHC ligands remains a difficult task. It is reported that only one out of 100 to 200 potential binders actually binds. Methods based on sequence data alone are fast but fail to capture all binding patterns, while structure-based methods are more promising but far too slow for large-scale screening of protein sequences. In this work, we propose a new method for the prediction problem. It is based on the assumption that peptide binding is an aggregate effect of contributions from independent binding of residues. The compatibility of each amino acid in the MHC binding pockets is examined thoroughly by molecular dynamics simulation. Values of energy terms important for binding are collected from the generated ensembles and are used to produce the allele-specific scoring matrix. Each entry in this matrix represents the favorableness, in terms of a particular "feature", of an amino acid in a binding position. Prediction models based on machine learning techniques are then trained to discriminate binders from non-binders. Our method is compared to two other sequence-based methods using HLA-A*0201 9-mer sequences. Three publicly available data sets are used: the MHCPEP and SYFPEITHI data sets, and the HXB2 genome. Overall, our method successfully improves the prediction accuracy with higher specificity. Its robustness to different sizes and ratios of training data proves its ability to provide reliable predictions with less dependency on the sequence data. The method also shows better generalizability in cross-allele predictions.
For predicting peptide-bound conformations, our preliminary approach based on energy minimization gives the satisfactory result of a backbone RMSD of 1.7 to 1.88 A compared to the crystal structures.
Export
BibTeX
@mastersthesis{Siu2005, TITLE = {Computational Prediction of {MHC}-Peptide Interaction}, AUTHOR = {Siu, Weng-In}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {T-cell recognition is a critical step in regulating the immune response. Activation of cytotoxic T-cells requires that MHC class I molecules form complexes with specific peptides and present them on the surface of the cell. Identification of potential ligands to MHC is therefore important for understanding disease pathogenesis and aiding vaccine design. Despite years of effort in the field, reliable prediction of MHC ligands remains a difficult task. It is reported that only one out of 100 to 200 potential binders actually binds. Methods based on sequence data alone are fast but fail to capture all binding patterns, while structure-based methods are more promising but far too slow for large-scale screening of protein sequences. In this work, we propose a new method for the prediction problem. It is based on the assumption that peptide binding is an aggregate effect of contributions from independent binding of residues. The compatibility of each amino acid in the MHC binding pockets is examined thoroughly by molecular dynamics simulation. Values of energy terms important for binding are collected from the generated ensembles and are used to produce the allele-specific scoring matrix. Each entry in this matrix represents the favorableness, in terms of a particular "feature", of an amino acid in a binding position. Prediction models based on machine learning techniques are then trained to discriminate binders from non-binders. Our method is compared to two other sequence-based methods using HLA-A*0201 9-mer sequences. Three publicly available data sets are used: the MHCPEP and SYFPEITHI data sets, and the HXB2 genome. Overall, our method successfully improves the prediction accuracy with higher specificity. Its robustness to different sizes and ratios of training data proves its ability to provide reliable predictions with less dependency on the sequence data. The method also shows better generalizability in cross-allele predictions. For predicting peptide-bound conformations, our preliminary approach based on energy minimization gives the satisfactory result of a backbone RMSD of 1.7 to 1.88 A compared to the crystal structures.}, }
Endnote
%0 Thesis %A Siu, Weng-In %Y Antes, Iris %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Computational Prediction of MHC-Peptide Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F3C3-8 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master %X T-cell recognition is a critical step in regulating the immune response. Activation of cytotoxic T-cells requires that MHC class I molecules form complexes with specific peptides and present them on the surface of the cell. Identification of potential ligands to MHC is therefore important for understanding disease pathogenesis and aiding vaccine design. Despite years of effort in the field, reliable prediction of MHC ligands remains a difficult task. It is reported that only one out of 100 to 200 potential binders actually binds. Methods based on sequence data alone are fast but fail to capture all binding patterns, while structure-based methods are more promising but far too slow for large-scale screening of protein sequences. In this work, we propose a new method for the prediction problem. It is based on the assumption that peptide binding is an aggregate effect of contributions from independent binding of residues. The compatibility of each amino acid in the MHC binding pockets is examined thoroughly by molecular dynamics simulation. Values of energy terms important for binding are collected from the generated ensembles and are used to produce the allele-specific scoring matrix. Each entry in this matrix represents the favorableness, in terms of a particular "feature", of an amino acid in a binding position. Prediction models based on machine learning techniques are then trained to discriminate binders from non-binders. Our method is compared to two other sequence-based methods using HLA-A*0201 9-mer sequences. Three publicly available data sets are used: the MHCPEP and SYFPEITHI data sets, and the HXB2 genome. Overall, our method successfully improves the prediction accuracy with higher specificity. Its robustness to different sizes and ratios of training data proves its ability to provide reliable predictions with less dependency on the sequence data. The method also shows better generalizability in cross-allele predictions. For predicting peptide-bound conformations, our preliminary approach based on energy minimization gives the satisfactory result of a backbone RMSD of 1.7 to 1.88 A compared to the crystal structures.
[145]
J. Sliwerski, “Locating the Risk of Changes,” Universität des Saarlandes, Saarbrücken, 2005.
Abstract
As a software system evolves, programmers make changes that sometimes lead to problems. The risk of later problems significantly depends on the location of the change. Which are the locations where changes impose the greatest risk? We introduce a set of automated techniques that relate a version history archive (such as CVS) with a bug database (such as BUGZILLA) to detect those locations where changes have been risky in the past. Our experiments show that simple measures have low accuracy in locating files that are most risky to change.
Export
BibTeX
@mastersthesis{Sliwerski2005, TITLE = {Locating the Risk of Changes}, AUTHOR = {Sliwerski, Jacek}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {As a software system evolves, programmers make changes that sometimes lead to problems. The risk of later problems significantly depends on the location of the change. Which are the locations where changes impose the greatest risk? We introduce a set of automated techniques that relate a version history archive (such as CVS) with a bug database (such as BUGZILLA) to detect those locations where changes have been risky in the past. Our experiments show that simple measures have low accuracy in locating files that are most risky to change.}, }
Endnote
%0 Thesis %A Sliwerski, Jacek %Y Zeller, Andreas %A referee: Zimmermann, Thomas %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Locating the Risk of Changes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F3C9-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master %X As a software system evolves, programmers make changes that sometimes lead to problems. The risk of later problems significantly depends on the location of the change. Which are the locations where changes impose the greatest risk? We introduce a set of automated techniques that relate a version history archive (such as CVS) with a bug database (such as BUGZILLA) to detect those locations where changes have been risky in the past. Our experiments show that simple measures have low accuracy in locating files that are most risky to change.
[146]
F. M. Suchanek, “Ontological Reasoning for Natural Language Understanding,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Suchanek:DA:2005, TITLE = {Ontological Reasoning for Natural Language Understanding}, AUTHOR = {Suchanek, Fabian M.}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256104005ECAFC-8D87190D5DB624ADC1256FE200545742-Suchanek:DA:2005}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Suchanek, Fabian M. %Y Weikum, Gerhard %A referee: Baumgartner, Peter %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society %T Ontological Reasoning for Natural Language Understanding : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-254C-B %F EDOC: 279073 %F OTHER: Local-ID: C1256104005ECAFC-8D87190D5DB624ADC1256FE200545742-Suchanek:DA:2005 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
[147]
J. Yin, “Model Selection for Mixtures of Mutagenetic Trees,” Universität des Saarlandes, Saarbrücken, 2005.
Export
BibTeX
@mastersthesis{Yin2005a, TITLE = {Model Selection for Mixtures of Mutagenetic Trees}, AUTHOR = {Yin, Junming}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Thesis %A Yin, Junming %Y Lengauer, Thomas %A referee: Beerenwinkel, Niko %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Model Selection for Mixtures of Mutagenetic Trees : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-091E-D %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2005 %V master %9 master
2004
[148]
N. Ahmed, “BRDF Reconstruction from Video Streams of Multi-View Recordings,” Universität des Saarlandes, Saarbrücken, 2004.
Abstract
Synthesizing photorealistic images is an active area of research in computer graphics. Image based rendering combined with inverse rendering methods is used to generate photorealistic images from real world images under novel illumination conditions. Traditionally, very high-quality real world images of static objects, obtained under known viewing and lighting conditions are used in inverse rendering for the measurement of surface reflectance properties. This thesis focuses on surface material reconstruction of dynamic objects from video streams of multi-view recordings. Working with fairly low resolution movie streams of a dynamic object recorded in known viewing conditions and a geometry model tracked through all time steps, we approximate the best light source configuration, and measure the bidirectional reflectance distribution function of the object. We construct diffuse and specular maps for the whole sequence, and a diffuse correction map for each time step. We have applied our method to sequences of a human actor and are now able to synthesize views of the actor in arbitrary poses under arbitrary lighting conditions.
Export
BibTeX
@mastersthesis{Naveed:MT:2004, TITLE = {{BRDF} Reconstruction from Video Streams of Multi-View Recordings}, AUTHOR = {Ahmed, Naveed}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-FB330D42C79FCE5BC1256F870041E593-Naveed:MT:2004}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {Synthesizing photorealistic images is an active area of research in computer graphics. Image based rendering combined with inverse rendering methods is used to generate photorealistic images from real world images under novel illumination conditions. Traditionally, very high-quality real world images of static objects, obtained under known viewing and lighting conditions are used in inverse rendering for the measurement of surface reflectance properties. This thesis focuses on surface material reconstruction of dynamic objects from video streams of multi-view recordings. Working with fairly low resolution movie streams of a dynamic object recorded in known viewing conditions and a geometry model tracked through all time steps, we approximate the best light source configuration, and measure the bidirectional reflectance distribution function of the object. We construct diffuse and specular maps for the whole sequence, and a diffuse correction map for each time step. We have applied our method to sequences of a human actor and are now able to synthesize views of the actor in arbitrary poses under arbitrary lighting conditions.}, }
Endnote
%0 Thesis %A Ahmed, Naveed %Y Lensche, Hendrik %A referee: Magnor, Marcus A. %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Graphics - Optics - Vision, MPI for Informatics, Max Planck Society %T BRDF Reconstruction from Video Streams of Multi-View Recordings : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2A42-A %F EDOC: 231919 %F OTHER: Local-ID: C125675300671F7B-FB330D42C79FCE5BC1256F870041E593-Naveed:MT:2004 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2004 %V master %9 master %X Synthesizing photorealistic images is an active area of research in computer graphics. Image based rendering combined with inverse rendering methods is used to generate photorealistic images from real world images under novel illumination conditions. Traditionally, very high-quality real world images of static objects, obtained under known viewing and lighting conditions are used in inverse rendering for the measurement of surface reflectance properties. This thesis focuses on surface material reconstruction of dynamic objects from video streams of multi-view recordings. Working with fairly low resolution movie streams of a dynamic object recorded in known viewing conditions and a geometry model tracked through all time steps, we approximate the best light source configuration, and measure the bidirectional reflectance distribution function of the object. We construct diffuse and specular maps for the whole sequence, and a diffuse correction map for each time step. We have applied our method to sequences of a human actor and are now able to synthesize views of the actor in arbitrary poses under arbitrary lighting conditions.
[149]
R. Angelova, “Neighborhood Conscious Hypertext Categorization,” Universität des Saarlandes, Saarbrücken, 2004.
Abstract
A fundamental issue in statistics, pattern recognition, and machine learning is that of classification. In a traditional classification problem, we wish to assign one of k labels (or classes) to each of n objects (or documents), in a way that is consistent with some observed data available about that problem. To achieve better classification results, we try to capture the information derived from pairwise relationships between objects, in particular hyperlinks between web documents. The usage of hyperlinks poses new problems not addressed in the extensive text classification literature. Links contain high-quality semantic clues that a purely text-based classifier cannot take advantage of. However, exploiting link information is non-trivial because it is noisy, and a naive use of terms in the link neighborhood of a document can degrade accuracy. The problem becomes even harder when only a very small fraction of document labels are known to the classifier and can be used for training, as is the case in a real classification scenario. Our work is based on an algorithm proposed by Soumen Chakrabarti and uses the theory of Markov Random Fields to derive a relaxation labelling technique for the class assignment problem. We show that the extra information contained in the hyperlinks between the documents can be exploited to achieve a significant improvement in the performance of classification. We implemented our algorithm in Java and ran our experiments on two sets of data obtained from the DBLP and IMDB databases. We observed up to 5.5 percent improvement in the accuracy of the classification and up to 10 percent higher recall and precision results.
Export
BibTeX
@mastersthesis{Ralitsa2004, TITLE = {Neighborhood Conscious Hypertext Categorization}, AUTHOR = {Angelova, Ralitsa}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {A fundamental issue in statistics, pattern recognition, and machine learning is that of classification. In a traditional classification problem, we wish to assign one of k labels (or classes) to each of n objects (or documents), in a way that is consistent with some observed data available about that problem. To achieve better classification results, we try to capture the information derived from pairwise relationships between objects, in particular hyperlinks between web documents. The usage of hyperlinks poses new problems not addressed in the extensive text classification literature. Links contain high-quality semantic clues that a purely text-based classifier cannot take advantage of. However, exploiting link information is non-trivial because it is noisy, and a naive use of terms in the link neighborhood of a document can degrade accuracy. The problem becomes even harder when only a very small fraction of document labels are known to the classifier and can be used for training, as is the case in a real classification scenario. Our work is based on an algorithm proposed by Soumen Chakrabarti and uses the theory of Markov Random Fields to derive a relaxation labelling technique for the class assignment problem. We show that the extra information contained in the hyperlinks between the documents can be exploited to achieve a significant improvement in the performance of classification. We implemented our algorithm in Java and ran our experiments on two sets of data obtained from the DBLP and IMDB databases. We observed up to 5.5 percent improvement in the accuracy of the classification and up to 10 percent higher recall and precision results.}, }
Endnote
%0 Thesis %A Angelova, Ralitsa %Y Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Neighborhood Conscious Hypertext Categorization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F483-0 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2004 %V master %9 master %X A fundamental issue in statistics, pattern recognition, and machine learning is that of classification. In a traditional classification problem, we wish to assign one of k labels (or classes) to each of n objects (or documents), in a way that is consistent with some observed data available about that problem. To achieve better classification results, we try to capture the information derived from pairwise relationships between objects, in particular hyperlinks between web documents. The usage of hyperlinks poses new problems not addressed in the extensive text classification literature. Links contain high-quality semantic clues that a purely text-based classifier cannot take advantage of. However, exploiting link information is non-trivial because it is noisy, and a naive use of terms in the link neighborhood of a document can degrade accuracy. The problem becomes even harder when only a very small fraction of document labels are known to the classifier and can be used for training, as is the case in a real classification scenario. Our work is based on an algorithm proposed by Soumen Chakrabarti and uses the theory of Markov Random Fields to derive a relaxation labelling technique for the class assignment problem. We show that the extra information contained in the hyperlinks between the documents can be exploited to achieve a significant improvement in the performance of classification. We implemented our algorithm in Java and ran our experiments on two sets of data obtained from the DBLP and IMDB databases. We observed up to 5.5 percent improvement in the accuracy of the classification and up to 10 percent higher recall and precision results.
[150]
A. Q. Kara, “Inductive Learning Approaches in Information Extraction: Analysis, Formalization, Comparison, Evaluation,” Universität des Saarlandes, Saarbrücken, 2004.
Abstract
Although the Information Extraction field has existed for almost two decades, it is still considered to be in its initial stage. There are at present many algorithms that are used for the Information Extraction task, and they also have a good success rate, but there are no benchmarks or standard data on which they can be compared among themselves. Most of the algorithms work well on semi-structured data, but they seem to fail when dealing with free text. Other algorithms fail when different types of data need to be extracted from the same document. One way is to somehow try to compare them all and then try to improve and create algorithms that are domain independent and work both efficiently and effectively. For this, we introduce an idea of formalizing Information Extraction algorithms and then finding out where we can improve them or what parts still need improvement for the overall performance. In the end, we describe what we obtained by formalizing the algorithms and what can be achieved by formalizing further algorithms.
Export
BibTeX
@mastersthesis{Kara2004, TITLE = {Inductive Learning Approaches in Information Extraction: Analysis, Formalization, Comparison, Evaluation}, AUTHOR = {Kara, Abdul Qadar}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {Although the Information Extraction field has existed for almost two decades, it is still considered to be in its initial stage. There are at present many algorithms that are used for the Information Extraction task, and they also have a good success rate, but there are no benchmarks or standard data on which they can be compared among themselves. Most of the algorithms work well on semi-structured data, but they seem to fail when dealing with free text. Other algorithms fail when different types of data need to be extracted from the same document. One way is to somehow try to compare them all and then try to improve and create algorithms that are domain independent and work both efficiently and effectively. For this, we introduce an idea of formalizing Information Extraction algorithms and then finding out where we can improve them or what parts still need improvement for the overall performance. In the end, we describe what we obtained by formalizing the algorithms and what can be achieved by formalizing further algorithms.}, }
Endnote
%0 Thesis %A Kara, Abdul Qadar %+ Programming Logics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society %T Inductive Learning Approaches in Information Extraction: Analysis, Formalization, Comparison, Evaluation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F48E-9 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2004 %V master %9 master %X Although the Information Extraction field has existed for almost two decades, it is still considered to be in its initial stage. There are at present many algorithms that are used for the Information Extraction task, and they also have a good success rate, but there are no benchmarks or standard data on which they can be compared among themselves. Most of the algorithms work well on semi-structured data, but they seem to fail when dealing with free text. Other algorithms fail when different types of data need to be extracted from the same document. One way is to somehow try to compare them all and then try to improve and create algorithms that are domain independent and work both efficiently and effectively. For this, we introduce an idea of formalizing Information Extraction algorithms and then finding out where we can improve them or what parts still need improvement for the overall performance. In the end, we describe what we obtained by formalizing the algorithms and what can be achieved by formalizing further algorithms.
[151]
C. Klein, “Controlled Perturbation for Voronoi Diagrams,” Universität des Saarlandes, Saarbrücken, 2004.
Export
BibTeX
@mastersthesis{Klein2004, TITLE = {Controlled Perturbation for {Voronoi} Diagrams}, AUTHOR = {Klein, Christian}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256428004B93B8-407186B9B72B2A1DC1256FA2005A295A-Klein2004}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, }
Endnote
%0 Thesis %A Klein, Christian %Y Mehlhorn, Kurt %A referee: Schmitt, Susanne %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Controlled Perturbation for Voronoi Diagrams : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2A70-2 %F EDOC: 232052 %F OTHER: Local-ID: C1256428004B93B8-407186B9B72B2A1DC1256FA2005A295A-Klein2004 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2004 %V master %9 master
[152]
J. Luxenburger, “Query-log based Authority Analysis for Web Information Search,” Universität des Saarlandes, Saarbrücken, 2004.
Abstract
The ongoing explosion of web information calls for more intelligent and personalized methods towards better search result quality for advanced queries. Query logs and click streams obtained from web browsers or search engines can contribute to better quality by exploiting the collaborative recommendations that are implicitly embedded in this information. The method presented in this work incorporates the notion of query nodes into the PageRank model and integrates the implicit relevance feedback given by click streams into the automated process of authority analysis. The enhanced PageRank scores, coined QRank scores, can be computed offline; at query-time they are combined with query-specific relevance measures with virtually no overhead. In our experiments significant improvements in the precision of search results were observed, which demonstrate the effectiveness of our model.
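As a loose illustration of the idea of adding query nodes to a link graph, the following sketch runs a plain PageRank-style power iteration over a small graph in which a hypothetical query node links to the documents clicked for that query. The graph shape, node names, and damping factor are illustrative assumptions, not the QRank model developed in the thesis.

```python
# Hedged sketch: PageRank power iteration over a web graph extended with a
# "query node" whose out-links encode click-stream feedback. All graph data
# here is made up for illustration.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping node -> list of outgoing neighbours."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            out = links[v]
            if not out:  # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
            else:
                for w in out:
                    new[w] += damping * rank[v] / len(out)
        rank = new
    return rank

# Three documents plus one query node; the query "foo" was followed by
# clicks on d1 and d3, so the query node links to those documents.
graph = {
    "d1": ["d2"],
    "d2": ["d1", "d3"],
    "d3": ["d1"],
    "q:foo": ["d1", "d3"],
}
ranks = pagerank(graph)
```

Documents receiving click-stream endorsement (here `d1`) accumulate extra authority through the query node, which is the intuition behind the enhanced scores described above.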
Export
BibTeX
@mastersthesis{Luxenburger2004, TITLE = {Query-log based Authority Analysis for Web Information Search}, AUTHOR = {Luxenburger, Julia}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {The ongoing explosion of web information calls for more intelligent and personalized methods towards better search result quality for advanced queries. Query logs and click streams obtained from web browsers or search engines can contribute to better quality by exploiting the collaborative recommendations that are implicitly embedded in this information. The method presented in this work incorporates the notion of query nodes into the PageRank model and integrates the implicit relevance feedback given by click streams into the automated process of authority analysis. The enhanced PageRank scores, coined QRank scores, can be computed offline; at query-time they are combined with query-specific relevance measures with virtually no overhead. In our experiments significant improvements in the precision of search results were observed, which demonstrate the effectiveness of our model.}, }
Endnote
%0 Thesis %A Luxenburger, Julia %Y Weikum, Gerhard %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Query-log based Authority Analysis for Web Information Search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F7F6-1 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2004 %V master %9 master %X The ongoing explosion of web information calls for more intelligent and personalized methods towards better search result quality for advanced queries. Query logs and click streams obtained from web browsers or search engines can contribute to better quality by exploiting the collaborative recommendations that are implicitly embedded in this information. The method presented in this work incorporates the notion of query nodes into the PageRank model and integrates the implicit relevance feedback given by click streams into the automated process of authority analysis. The enhanced PageRank scores, coined QRank scores, can be computed offline; at query-time they are combined with query-specific relevance measures with virtually no overhead. In our experiments significant improvements in the precision of search results were observed, which demonstrate the effectiveness of our model.
[153]
K. Shi, “Extracting the Topological Structure of the Higher Order Critical points for 3D Vector Fields,” Universität des Saarlandes, Saarbrücken, 2004.
Abstract
Critical points of vector fields are important topological features, characterized by the number and order of areas of different flow behavior around them. We present an approach to detect the different sectors around general critical points of 3D vector fields. This approach is based on a piecewise linear approximation of the vector fields around the critical points. We show examples of how this approach can also treat critical points of higher order, and we discuss the limitations of the approach as well.
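For context, the following sketch shows the standard first-order classification that sector-based analyses of higher-order points generalize: a first-order critical point of a 3D vector field is categorized by the signs of the real parts of the eigenvalues of the Jacobian there. The eigenvalue triples used are illustrative assumptions, not data from the thesis.

```python
# Minimal sketch (assumed, standard vector-field topology): classify a
# first-order 3D critical point from the real parts of the Jacobian's
# eigenvalues at the point.

def classify_critical_point(eigenvalue_real_parts):
    """Classify from the three real parts Re(lambda_i) of the Jacobian."""
    pos = sum(1 for e in eigenvalue_real_parts if e > 0)
    neg = sum(1 for e in eigenvalue_real_parts if e < 0)
    if pos == 3:
        return "repelling node (source)"   # all flow leaves the point
    if neg == 3:
        return "attracting node (sink)"    # all flow enters the point
    if pos and neg:
        return "saddle"                    # mixed inflow/outflow sectors
    return "degenerate (higher-order analysis needed)"
```

The degenerate case (a zero real part) is precisely where this eigenvalue test fails and a sector-detection approach like the one described in the abstract becomes necessary.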
Export
BibTeX
@mastersthesis{Kuangyu2004, TITLE = {Extracting the Topological Structure of the Higher Order Critical points for {3D} Vector Fields}, AUTHOR = {Shi, Kuangyu}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {Critical points of vector fields are important topological features, characterized by the number and order of areas of different flow behavior around them. We present an approach to detect the different sectors around general critical points of 3D vector fields. This approach is based on a piecewise linear approximation of the vector fields around the critical points. We show examples of how this approach can also treat critical points of higher order, and we discuss the limitations of the approach as well.}, }
Endnote
%0 Thesis %A Shi, Kuangyu %Y Theisel, Holger %Y Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Extracting the Topological Structure of the Higher Order Critical points for 3D Vector Fields : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F7F9-C %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2004 %V master %9 master %X Critical points of vector fields are important topological features, characterized by the number and order of areas of different flow behavior around them. We present an approach to detect the different sectors around general critical points of 3D vector fields. This approach is based on a piecewise linear approximation of the vector fields around the critical points. We show examples of how this approach can also treat critical points of higher order, and we discuss the limitations of the approach as well.
[154]
E. Stegantova, “Multicommodity Flows over Time with Costs,” Universität des Saarlandes, Saarbrücken, 2004.
Abstract
Flows over time (dynamic flows) generalize standard network flows by introducing a new element: time. They naturally model problems where travel and transmission are not instantaneous. In this work we consider two dynamic flow problems: the Quickest Multicommodity Dynamic Flow Problem with Bounded Cost (QMDFP) and the Maximal Multicommodity Dynamic Flow Problem (MMDFP). Both problems are known to be NP-hard. In the first part we propose two methods of improving the result obtained by the efficient two-approximation algorithm proposed by Lisa Fleischer and Martin Skutella for solving the QMDFP. The approximation algorithm constructs a temporally repeated flow using a so-called "static average flow". In the first method we prove that the value of the static average flow can be increased by a factor that depends on the length of the shortest path from a source to a sink in the underlying network. Increasing the value of the static average flow allows us to save time on sending the necessary amount of flow (the given demands) from the sources to the sinks. The cost of the resulting temporally repeated flow remains unchanged. In the second method we propose an algorithm that reconstructs the static average flow in such a way that the length of the longest path used by the flow becomes shorter. This allows us to wait for a shorter period of time until the last sent unit of flow reaches its sink. The drawback of reconstructing the flow is an increase in cost, but we prove that the cost increases at most by a factor of two. In the second part of the thesis we deal with the MMDFP. We give an instance of a network demonstrating that the optimal solution is not always a temporally repeated flow, but we give an easy proof of the fact that the difference between the optimal solution and the Maximal Multicommodity Temporally Repeated Flow is bounded by a constant that depends on the network and not on the given time horizon. This fact allows us to approximate the optimal Maximal Multicommodity Dynamic Flow with the Maximal Multicommodity Temporally Repeated Flow for large enough time horizons.
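The temporally repeated construction mentioned above can be made concrete with the classical value formula: each path P in a decomposition of the static flow, carrying f_P units with transit time tau_P, is repeated for T − tau_P time units within horizon T. The sketch below computes that value; the example decomposition is an illustrative assumption, not data from the thesis.

```python
# Hedged sketch of the classical temporally-repeated-flow value: each path
# (f_P, tau_P) contributes f_P * (T - tau_P) within horizon T, provided it
# fits in the horizon. The path decomposition below is made up.

def temporally_repeated_value(paths, horizon):
    """paths: list of (flow_value, transit_time) pairs from a path decomposition."""
    return sum(f * (horizon - tau) for f, tau in paths if horizon > tau)

# 3 units/step over a short path (transit time 2), 1 unit/step over a
# longer path (transit time 5).
decomposition = [(3.0, 2), (1.0, 5)]
value = temporally_repeated_value(decomposition, horizon=10)  # 3*8 + 1*5 = 29
```

The formula also shows why shortening the longest used path, as in the second method above, helps: a smaller tau_P enlarges every term (T − tau_P).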
Export
BibTeX
@mastersthesis{Stegantova2004, TITLE = {Multicommodity Flows over Time with Costs}, AUTHOR = {Stegantova, Evghenia}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {Flows over time (dynamic flows) generalize standard network flows by introducing a new element: time. They naturally model problems where travel and transmission are not instantaneous. In this work we consider two dynamic flow problems: the Quickest Multicommodity Dynamic Flow Problem with Bounded Cost (QMDFP) and the Maximal Multicommodity Dynamic Flow Problem (MMDFP). Both problems are known to be NP-hard. In the first part we propose two methods of improving the result obtained by the efficient two-approximation algorithm proposed by Lisa Fleischer and Martin Skutella for solving the QMDFP. The approximation algorithm constructs a temporally repeated flow using a so-called "static average flow". In the first method we prove that the value of the static average flow can be increased by a factor that depends on the length of the shortest path from a source to a sink in the underlying network. Increasing the value of the static average flow allows us to save time on sending the necessary amount of flow (the given demands) from the sources to the sinks. The cost of the resulting temporally repeated flow remains unchanged. In the second method we propose an algorithm that reconstructs the static average flow in such a way that the length of the longest path used by the flow becomes shorter. This allows us to wait for a shorter period of time until the last sent unit of flow reaches its sink. The drawback of reconstructing the flow is an increase in cost, but we prove that the cost increases at most by a factor of two. In the second part of the thesis we deal with the MMDFP. We give an instance of a network demonstrating that the optimal solution is not always a temporally repeated flow, but we give an easy proof of the fact that the difference between the optimal solution and the Maximal Multicommodity Temporally Repeated Flow is bounded by a constant that depends on the network and not on the given time horizon. This fact allows us to approximate the optimal Maximal Multicommodity Dynamic Flow with the Maximal Multicommodity Temporally Repeated Flow for large enough time horizons.}, }
Endnote
%0 Thesis %A Stegantova, Evghenia %Y Skutella, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Multicommodity Flows over Time with Costs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F7FE-2 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2004 %V master %9 master %X Flows over time (dynamic flows) generalize standard network flows by introducing a new element: time. They naturally model problems where travel and transmission are not instantaneous. In this work we consider two dynamic flow problems: the Quickest Multicommodity Dynamic Flow Problem with Bounded Cost (QMDFP) and the Maximal Multicommodity Dynamic Flow Problem (MMDFP). Both problems are known to be NP-hard. In the first part we propose two methods of improving the result obtained by the efficient two-approximation algorithm proposed by Lisa Fleischer and Martin Skutella for solving the QMDFP. The approximation algorithm constructs a temporally repeated flow using a so-called "static average flow". In the first method we prove that the value of the static average flow can be increased by a factor that depends on the length of the shortest path from a source to a sink in the underlying network. Increasing the value of the static average flow allows us to save time on sending the necessary amount of flow (the given demands) from the sources to the sinks. The cost of the resulting temporally repeated flow remains unchanged. In the second method we propose an algorithm that reconstructs the static average flow in such a way that the length of the longest path used by the flow becomes shorter. This allows us to wait for a shorter period of time until the last sent unit of flow reaches its sink. The drawback of reconstructing the flow is an increase in cost, but we prove that the cost increases at most by a factor of two. In the second part of the thesis we deal with the MMDFP. We give an instance of a network demonstrating that the optimal solution is not always a temporally repeated flow, but we give an easy proof of the fact that the difference between the optimal solution and the Maximal Multicommodity Temporally Repeated Flow is bounded by a constant that depends on the network and not on the given time horizon. This fact allows us to approximate the optimal Maximal Multicommodity Dynamic Flow with the Maximal Multicommodity Temporally Repeated Flow for large enough time horizons.
[155]
I. Trajkovski, “Analysis of Protein Binding Pocket Flexibility,” Universität des Saarlandes, Saarbrücken, 2004.
Export
BibTeX
@mastersthesis{Trajkovski2004, TITLE = {Analysis of Protein Binding Pocket Flexibility}, AUTHOR = {Trajkovski, Igor}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125673F004B2D7B-FE9E2234859B9C8BC1256FBD005A29E8-Trajkovski2004}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, }
Endnote
%0 Thesis %A Trajkovski, Igor %Y Antes, Iris %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Analysis of Protein Binding Pocket Flexibility : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2A1B-4 %F EDOC: 232004 %F OTHER: Local-ID: C125673F004B2D7B-FE9E2234859B9C8BC1256FBD005A29E8-Trajkovski2004 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2004 %V master %9 master
[156]
H. Yu, “Importance Sampling in Photon Tracing,” Universität des Saarlandes, Saarbrücken, 2004.
Abstract
All global illumination algorithms are based on the rendering equation, which each algorithm solves in a different way. Most algorithms solve the equation using Monte Carlo methods. In this process many samples are produced, and these samples contribute differently to the generated image. If one hopes to get an acceptable result with fewer samples, important samples, which contribute more to the final image, must be considered first. For example, in ordinary Light Tracing, millions of photons have to be traced in order to obtain the distribution of illumination in the whole scene. In fact only a part of the scene can be observed most of the time, and only photons hitting visible surfaces contribute to the generated image. If only a small part of the entire scene is visible, we spend most of the time tracing and storing unimportant photons that make no contribution to the final image. Even considering only visible photons, one can see that their contributions to the image differ greatly. Surfaces located closer to the viewpoint have a larger projected area on the image plane and thus require more photons to achieve the same noise level as surfaces located further away. The orientation of a surface with respect to the view direction also affects view-dependent photon importance. Depending on the application and the Monte Carlo algorithm used, one can come up with many other criteria to compute this importance, which may dramatically affect the quality of the produced images and the computation speed. The algorithm presented in the thesis takes only useful (visible) photons into account, concentrating computation on the surfaces visible to the currently active camera and balancing the distribution of photons on the image plane, which greatly improves image quality. Using this concept, we can get a better result with fewer photons. In this way it is possible to save not only rendering time but also storage space, because fewer photons need to be stored.
This idea can also be applied in other algorithms where millions of samples have to be generated. Once the differences among these samples are identified, we can pay more attention to the important samples that contribute more to the resulting image, while ignoring less important ones, thus using fewer samples to get a better result.
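The underlying principle, concentrating samples where they contribute most, is generic Monte Carlo importance sampling. The sketch below demonstrates it on a toy integral rather than on photon tracing: it estimates the integral of f(x) = x² on [0, 1] (exact value 1/3) by drawing x with density p(x) = 2x and averaging f(x)/p(x). The integrand and density are illustrative assumptions, not the photon-weighting scheme of the thesis.

```python
# Hedged sketch of Monte Carlo importance sampling on a toy integral.
# Sampling density p(x) = 2x puts more samples where f(x) = x^2 is large,
# reducing variance versus uniform sampling.
import math
import random

def importance_estimate(n_samples, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = math.sqrt(rng.random())   # inverse-CDF sampling of p(x) = 2x
        total += (x * x) / (2.0 * x)  # unbiased estimator f(x) / p(x)
    return total / n_samples
```

In the photon-tracing setting described above, the same idea means emitting or keeping photons with probability proportional to their expected contribution to the visible image, then reweighting them accordingly.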
Export
BibTeX
@mastersthesis{YuHang:ISPT:2004, TITLE = {Importance Sampling in Photon Tracing}, AUTHOR = {Yu, Hang}, LANGUAGE = {eng}, LOCALID = {Local-ID: C125675300671F7B-BDC02E96AF19A112C1256F870042CA3A-YuHang:ISPT:2004}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {All global illumination algorithms are based on the rendering equation, which each algorithm solves in a different way. Most algorithms solve the equation using Monte Carlo methods. In this process many samples are produced, and these samples contribute differently to the generated image. If one hopes to get an acceptable result with fewer samples, important samples, which contribute more to the final image, must be considered first. For example, in ordinary Light Tracing, millions of photons have to be traced in order to obtain the distribution of illumination in the whole scene. In fact only a part of the scene can be observed most of the time, and only photons hitting visible surfaces contribute to the generated image. If only a small part of the entire scene is visible, we spend most of the time tracing and storing unimportant photons that make no contribution to the final image. Even considering only visible photons, one can see that their contributions to the image differ greatly. Surfaces located closer to the viewpoint have a larger projected area on the image plane and thus require more photons to achieve the same noise level as surfaces located further away. The orientation of a surface with respect to the view direction also affects view-dependent photon importance. Depending on the application and the Monte Carlo algorithm used, one can come up with many other criteria to compute this importance, which may dramatically affect the quality of the produced images and the computation speed. The algorithm presented in the thesis takes only useful (visible) photons into account, concentrating computation on the surfaces visible to the currently active camera and balancing the distribution of photons on the image plane, which greatly improves image quality. Using this concept, we can get a better result with fewer photons. In this way it is possible to save not only rendering time but also storage space, because fewer photons need to be stored. This idea can also be applied in other algorithms where millions of samples have to be generated. Once the differences among these samples are identified, we can pay more attention to the important samples that contribute more to the resulting image, while ignoring less important ones, thus using fewer samples to get a better result.}, }
Endnote
%0 Thesis %A Yu, Hang %Y Dmitriev, Kirill Alexandrovich %A referee: Seidel, Hans-Peter %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Importance Sampling in Photon Tracing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2ABF-5 %F EDOC: 231913 %F OTHER: Local-ID: C125675300671F7B-BDC02E96AF19A112C1256F870042CA3A-YuHang:ISPT:2004 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2004 %V master %9 master %X All global illumination algorithms are based on the rendering equation, which each algorithm solves in a different way. Most algorithms solve the equation using Monte Carlo methods. In this process many samples are produced, and these samples contribute differently to the generated image. If one hopes to get an acceptable result with fewer samples, important samples, which contribute more to the final image, must be considered first. For example, in ordinary Light Tracing, millions of photons have to be traced in order to obtain the distribution of illumination in the whole scene. In fact only a part of the scene can be observed most of the time, and only photons hitting visible surfaces contribute to the generated image. If only a small part of the entire scene is visible, we spend most of the time tracing and storing unimportant photons that make no contribution to the final image. Even considering only visible photons, one can see that their contributions to the image differ greatly. Surfaces located closer to the viewpoint have a larger projected area on the image plane and thus require more photons to achieve the same noise level as surfaces located further away. The orientation of a surface with respect to the view direction also affects view-dependent photon importance. Depending on the application and the Monte Carlo algorithm used, one can come up with many other criteria to compute this importance, which may dramatically affect the quality of the produced images and the computation speed. The algorithm presented in the thesis takes only useful (visible) photons into account, concentrating computation on the surfaces visible to the currently active camera and balancing the distribution of photons on the image plane, which greatly improves image quality. Using this concept, we can get a better result with fewer photons. In this way it is possible to save not only rendering time but also storage space, because fewer photons need to be stored. This idea can also be applied in other algorithms where millions of samples have to be generated. Once the differences among these samples are identified, we can pay more attention to the important samples that contribute more to the resulting image, while ignoring less important ones, thus using fewer samples to get a better result.
2003
[157]
E. de Aguiar, “Character Animation from a Motion Capture Database,” Universität des Saarlandes, Saarbrücken, 2003.
Abstract
This thesis discusses methods that use information contained in a motion capture database to assist in the creation of a realistic character animation. Starting with an animation sketch, where only a small number of keyframes for some degrees of freedom are set, the motion capture data is used to improve the initial motion quality. First, the multiresolution filtering technique is presented and it is shown how this method can be used as a building block for character animation. Then, the hierarchical fragment method is introduced, which uses signal processing techniques, the skeleton hierarchy information and a simple matching algorithm applied to data fragments to synthesize missing degrees of freedom in a character animation from a motion capture database. In a third technique, a principal component model is fitted to the motion capture database and it is demonstrated that using the motion principal components a character animation can be edited and enhanced after it has been created. After comparing these methods, a hybrid approach combining the individual techniques' advantages is proposed, which uses a pipeline in order to create the character animation in a simple and intuitive way. Finally, the methods and results are reviewed and approaches for future improvements are mentioned.
Export
BibTeX
@mastersthesis{Aguiar2003, TITLE = {Character Animation from a Motion Capture Database}, AUTHOR = {de Aguiar, Edilson}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {This thesis discusses methods that use information contained in a motion capture database to assist in the creation of a realistic character animation. Starting with an animation sketch, where only a small number of keyframes for some degrees of freedom are set, the motion capture data is used to improve the initial motion quality. First, the multiresolution filtering technique is presented and it is shown how this method can be used as a building block for character animation. Then, the hierarchical fragment method is introduced, which uses signal processing techniques, the skeleton hierarchy information and a simple matching algorithm applied to data fragments to synthesize missing degrees of freedom in a character animation from a motion capture database. In a third technique, a principal component model is fitted to the motion capture database and it is demonstrated that using the motion principal components a character animation can be edited and enhanced after it has been created. After comparing these methods, a hybrid approach combining the individual techniques' advantages is proposed, which uses a pipeline in order to create the character animation in a simple and intuitive way. Finally, the methods and results are reviewed and approaches for future improvements are mentioned.}, }
Endnote
%0 Thesis %A de Aguiar, Edilson %Y Theobalt, Christian %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Character Animation from a Motion Capture Database : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F80B-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2003 %V master %9 master %X This thesis discusses methods that use information contained in a motion capture database to assist in the creation of a realistic character animation. Starting with an animation sketch, where only a small number of keyframes for some degrees of freedom are set, the motion capture data is used to improve the initial motion quality. First, the multiresolution filtering technique is presented and it is shown how this method can be used as a building block for character animation. Then, the hierarchical fragment method is introduced, which uses signal processing techniques, the skeleton hierarchy information and a simple matching algorithm applied to data fragments to synthesize missing degrees of freedom in a character animation from a motion capture database. In a third technique, a principal component model is fitted to the motion capture database and it is demonstrated that using the motion principal components a character animation can be edited and enhanced after it has been created. After comparing these methods, a hybrid approach combining the individual techniques' advantages is proposed, which uses a pipeline in order to create the character animation in a simple and intuitive way. Finally, the methods and results are reviewed and approaches for future improvements are mentioned.
[158]
W. Ding, “Geometric Rounding without changing the Topology,” Universität des Saarlandes, Saarbrücken, 2003.
Export
BibTeX
@mastersthesis{Dipl03/Ding, TITLE = {Geometric Rounding without changing the Topology}, AUTHOR = {Ding, Wei}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256428004B93B8-0CA4F6A423444329C1256E1A00470230-Dipl03/Ding}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, }
Endnote
%0 Thesis %A Ding, Wei %Y Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Geometric Rounding without changing the Topology : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2D16-5 %F EDOC: 201948 %F OTHER: Local-ID: C1256428004B93B8-0CA4F6A423444329C1256E1A00470230-Dipl03/Ding %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2003 %V master %9 master
[159]
K. Kaligosi, “Length bounded Network Flows,” Universität des Saarlandes, Saarbrücken, 2003.
Export
BibTeX
@mastersthesis{Kaligossi/Master03, TITLE = {Length bounded Network Flows}, AUTHOR = {Kaligosi, Kanela}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256428004B93B8-441A4C4507CE380AC1256E1D004AE6F4-Kaligossi/Master03}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, }
Endnote
%0 Thesis %A Kaligosi, Kanela %Y Skutella, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Length bounded Network Flows : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2D64-8 %F EDOC: 201958 %F OTHER: Local-ID: C1256428004B93B8-441A4C4507CE380AC1256E1D004AE6F4-Kaligossi/Master03 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2003 %V master %9 master
[160]
M. S. Kiraz, “Formalization and Verification of Informal Security Protocol Description,” Universität des Saarlandes, Saarbrücken, 2003.
Abstract
Conclusions: In this master thesis, we have started with an informal security protocol representation. We have demonstrated the translation of protocols into Horn clauses, using the well-known Otway-Rees protocol as an example. We have also defined and formalized the semantics of the protocol for all participants. For the work presented in this thesis we have assumed perfect encryption. We also assume that the protocol is executed in the presence of an attacker that can listen, compute new messages from the messages it has already received, and send any message it can build. We formalized the abilities of the attacker and defined the attacker's view of a message. By looking at the views of the messages, a participant stops the protocol run if it can distinguish the views; if it cannot distinguish the messages from each other, it replies to the previous message. Related work has been done in reference [5] for CAPSL (Common Authentication Protocol Specification Language), which is a high-level language for applying formal methods to the security analysis of cryptographic protocols. The protocol is specified in a form that could be used as the input format for any formal analysis.
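The Horn-clause attacker model mentioned above can be sketched generically: attacker capabilities become rules, and the attacker's knowledge is the set of terms closed under those rules. The following toy fixpoint computation is an assumed illustration of that general style, not the thesis's formalization; the term encoding and rule set are made up.

```python
# Assumed sketch of a Horn-style attacker-knowledge closure. Terms are
# tuples: ("enc", m, k) is message m encrypted under key k, and
# ("pair", a, b) is a pair. Two rules model attacker abilities:
#   knows(pair(a, b)) -> knows(a), knows(b)        (split pairs)
#   knows(enc(m, k)) & knows(k) -> knows(m)        (decrypt with known key)

def saturate(knowledge):
    """Forward-chain the rules to a fixpoint over the attacker's knowledge."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            derived = []
            if isinstance(t, tuple) and t[0] == "pair":
                derived += [t[1], t[2]]          # attacker splits pairs
            if isinstance(t, tuple) and t[0] == "enc" and t[2] in known:
                derived.append(t[1])             # decrypt with a known key
            for d in derived:
                if d not in known:
                    known.add(d)
                    changed = True
    return known

# The attacker observes an encrypted secret paired with a leaked key,
# so saturation should derive the secret itself.
observed = {("pair", ("enc", "secret", "k"), "k")}
facts = saturate(observed)
```

Checking whether a secrecy goal fails then reduces to asking whether the secret term appears in the saturated knowledge set.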
Export
BibTeX
@mastersthesis{Kiraz2003, TITLE = {Formalization and Verification of Informal Security Protocol Description}, AUTHOR = {Kiraz, Mehmet Sabir}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Conclusions: In this master thesis, we have started from an informal security protocol representation. We have demonstrated the translation of protocols into Horn clauses, using the well-known Otway-Rees protocol as an example. We have also defined and formalized the semantics of the protocol for all participants. For the work presented in this thesis we have assumed perfect encryption. We also assume that the protocol is executed in the presence of an attacker that can listen, compute new messages from the messages it has already received, and send any message it can build. We have formalized the abilities of the attacker and defined each participant's view of the messages. By comparing views, a participant stops the protocol run if it can distinguish them, and replies to the previous message if it cannot. Related work has been done in reference [5] for CAPSL (Common Authentication Protocol Specification Language), a high-level language for applying formal methods to the security analysis of cryptographic protocols, in which a protocol is specified in a form that can be used as the input format for any formal analysis.}, }
Endnote
%0 Thesis %A Kiraz, Mehmet Sabir %Y Blanchet, Bruno %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Static Analysis, MPI for Informatics, Max Planck Society %T Formalization and Verification of Informal Security Protocol Description : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F819-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2003 %V master %9 master %X Conclusions: In this master thesis, we have started from an informal security protocol representation. We have demonstrated the translation of protocols into Horn clauses, using the well-known Otway-Rees protocol as an example. We have also defined and formalized the semantics of the protocol for all participants. For the work presented in this thesis we have assumed perfect encryption. We also assume that the protocol is executed in the presence of an attacker that can listen, compute new messages from the messages it has already received, and send any message it can build. We have formalized the abilities of the attacker and defined each participant's view of the messages. By comparing views, a participant stops the protocol run if it can distinguish them, and replies to the previous message if it cannot. Related work has been done in reference [5] for CAPSL (Common Authentication Protocol Specification Language), a high-level language for applying formal methods to the security analysis of cryptographic protocols, in which a protocol is specified in a form that can be used as the input format for any formal analysis.
[161]
R. Kumar, “Proving Program Termination via Transition Invariants,” Universität des Saarlandes, Saarbrücken, 2003.
Abstract
We can prove termination of C programs by computing 'strong enough' transition invariants via abstract interpretation. In this thesis, we describe the basic ingredients for an implementation of this computation. Namely, we show how to extract models from C programs (using the GCA tool [7]) and how to construct an abstract domain of transition predicates. Furthermore, we propose a method for 'compacting' a model that improves the running time of the transition invariant generation algorithm. We implement these ingredients and the proposed optimization, and practically evaluate their effectiveness.
Export
BibTeX
@mastersthesis{Kumar2003, TITLE = {Proving Program Termination via Transition Invariants}, AUTHOR = {Kumar, Ramesh}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We can prove termination of C programs by computing 'strong enough' transition invariants via abstract interpretation. In this thesis, we describe the basic ingredients for an implementation of this computation. Namely, we show how to extract models from C programs (using the GCA tool [7]) and how to construct an abstract domain of transition predicates. Furthermore, we propose a method for 'compacting' a model that improves the running time of the transition invariant generation algorithm. We implement these ingredients and the proposed optimization, and practically evaluate their effectiveness.}, }
Endnote
%0 Thesis %A Kumar, Ramesh %Y Podelski, Andreas %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society %T Proving Program Termination via Transition Invariants : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F81B-7 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2003 %V master %9 master %X We can prove termination of C programs by computing 'strong enough' transition invariants via abstract interpretation. In this thesis, we describe the basic ingredients for an implementation of this computation. Namely, we show how to extract models from C programs (using the GCA tool [7]) and how to construct an abstract domain of transition predicates. Furthermore, we propose a method for 'compacting' a model that improves the running time of the transition invariant generation algorithm. We implement these ingredients and the proposed optimization, and practically evaluate their effectiveness.
[162]
C. Madrigal Mora, “Client Framework for the REAL Smart Door Displays,” Universität des Saarlandes, Saarbrücken, 2003.
Abstract
Technological advances have allowed computers to enter nearly every aspect of our daily lives. Every day more services become available to us through a varied array of devices: mobile phones, portable video games, and digital agendas provide means to make our everyday life easier. As technology improves, these devices will become smaller and more powerful; eventually, ubiquitous computing will merge computers with our surroundings: office furniture, windows, and doors. The current work presents the development of the client side of a framework for studying human interaction with such environment-embedded devices. The environment for this interaction is the Chair of Artificial Intelligence at Saarland University, and the devices are Pocket Digital Assistants mounted next to each office door, functioning as embedded displays. The displays provide information about the office occupants and their schedules, and allow visitors to leave voice and written messages or request appointments. These displays also take part in a pedestrian navigation effort that forms part of Project REAL [BKW02]; for instance, they provide users with directions to aid the indoor navigation task. Our intention is that, over a longer period of time, new applications and services will be provided through these embedded displays, enabling further research on human-machine interaction.
Export
BibTeX
@mastersthesis{MadrigalMora2003, TITLE = {Client Framework for the {REAL} Smart Door Displays}, AUTHOR = {Madrigal Mora, Cristi{\'a}n}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Technological advances have allowed computers to enter nearly every aspect of our daily lives. Every day more services become available to us through a varied array of devices: mobile phones, portable video games, and digital agendas provide means to make our everyday life easier. As technology improves, these devices will become smaller and more powerful; eventually, ubiquitous computing will merge computers with our surroundings: office furniture, windows, and doors. The current work presents the development of the client side of a framework for studying human interaction with such environment-embedded devices. The environment for this interaction is the Chair of Artificial Intelligence at Saarland University, and the devices are Pocket Digital Assistants mounted next to each office door, functioning as embedded displays. The displays provide information about the office occupants and their schedules, and allow visitors to leave voice and written messages or request appointments. These displays also take part in a pedestrian navigation effort that forms part of Project REAL [BKW02]; for instance, they provide users with directions to aid the indoor navigation task. Our intention is that, over a longer period of time, new applications and services will be provided through these embedded displays, enabling further research on human-machine interaction.}, }
Endnote
%0 Thesis %A Madrigal Mora, Cristi&#225;n %Y Wahlster, Wolfgang %A referee: Kr&#252;ger, Antonio %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Client Framework for the REAL Smart Door Displays : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F821-8 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2003 %V master %9 master %X Technological advances have allowed computers to enter nearly every aspect of our daily lives. Every day more services become available to us through a varied array of devices: mobile phones, portable video games, and digital agendas provide means to make our everyday life easier. As technology improves, these devices will become smaller and more powerful; eventually, ubiquitous computing will merge computers with our surroundings: office furniture, windows, and doors. The current work presents the development of the client side of a framework for studying human interaction with such environment-embedded devices. The environment for this interaction is the Chair of Artificial Intelligence at Saarland University, and the devices are Pocket Digital Assistants mounted next to each office door, functioning as embedded displays. The displays provide information about the office occupants and their schedules, and allow visitors to leave voice and written messages or request appointments. These displays also take part in a pedestrian navigation effort that forms part of Project REAL [BKW02]; for instance, they provide users with directions to aid the indoor navigation task. Our intention is that, over a longer period of time, new applications and services will be provided through these embedded displays, enabling further research on human-machine interaction.
[163]
P. McCabe, “Lower Bounding the Number of Straight-Edge Triangulations of Planar Point Sets,” Universität des Saarlandes, Saarbrücken, 2003.
Abstract
We examine the number of triangulations that any set of n points in the plane must have, and prove that (i) any set of n points has at least 0.00037 · 2.2^n triangulations, (ii) any set with three extreme points and n interior points has at least 0.112 · 2.569^n triangulations, and (iii) any set with n interior points has at least 0.238 · 2.38^n triangulations. The best previously known lower bound for the number of triangulations of n points in the plane is 0.0822 · 2.0129^n. We also give a method of automatically extending known bounds for small point sets to general lower bounds.
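Because the new general bound has a larger base (2.2 vs. 2.0129) but a smaller constant, it only overtakes the previous bound for sufficiently large n. A quick numeric check (illustrative only, using the constants quoted in the abstract):

```python
# Compare the two triangulation lower bounds quoted in the abstract:
# the new general bound 0.00037 * 2.2**n vs. the previous best 0.0822 * 2.0129**n.

def new_bound(n):
    return 0.00037 * 2.2**n

def old_bound(n):
    return 0.0822 * 2.0129**n

# The larger base eventually dominates despite the much smaller constant.
crossover = next(n for n in range(1, 1000) if new_bound(n) > old_bound(n))
print(crossover)  # -> 61
```

So with these constants the improvement is asymptotic: the new bound is the stronger of the two once n reaches 61.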
Export
BibTeX
@mastersthesis{McCabe2003, TITLE = {Lower Bounding the Number of Straight-Edge Triangulations of Planar Point Sets}, AUTHOR = {McCabe, Paul}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We examine the number of triangulations that any set of n points in the plane must have, and prove that (i) any set of n points has at least $0.00037 \cdot 2.2^n$ triangulations, (ii) any set with three extreme points and n interior points has at least $0.112 \cdot 2.569^n$ triangulations, and (iii) any set with n interior points has at least $0.238 \cdot 2.38^n$ triangulations. The best previously known lower bound for the number of triangulations of n points in the plane is $0.0822 \cdot 2.0129^n$. We also give a method of automatically extending known bounds for small point sets to general lower bounds.}, }
Endnote
%0 Thesis %A McCabe, Paul %Y Seidel, Raimund %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations %T Lower Bounding the Number of Straight-Edge Triangulations of Planar Point Sets : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F823-4 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2003 %V master %9 master %X We examine the number of triangulations that any set of n points in the plane must have, and prove that (i) any set of n points has at least 0.00037 · 2.2^n triangulations, (ii) any set with three extreme points and n interior points has at least 0.112 · 2.569^n triangulations, and (iii) any set with n interior points has at least 0.238 · 2.38^n triangulations. The best previously known lower bound for the number of triangulations of n points in the plane is 0.0822 · 2.0129^n. We also give a method of automatically extending known bounds for small point sets to general lower bounds.
[164]
J. L. Schoner, “Interactive Haptics and Display for Viscoelastic Solids,” Universität des Saarlandes, Saarbrücken, 2003.
Abstract
This thesis describes a way of modeling viscoelasticity in deformable solids for interactive haptic and display purposes. The model is based on elastostatic deformation characterized by a discrete Green's function matrix, which is extended using a viscoelastic add-on. The underlying idea is to replace the linear, time-independent relationships between the matrix entries, force, and displacement with non-linear, time-dependent relationships. A general framework is given, in which different viscoelastic models can be devised. One such model based on physical measurements is discussed in detail. Finally, the details and results of a system that implements this model are described.
Export
BibTeX
@mastersthesis{Schoner2003, TITLE = {Interactive Haptics and Display for Viscoelastic Solids}, AUTHOR = {Schoner, Jeffrey Lawrence}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {This thesis describes a way of modeling viscoelasticity in deformable solids for interactive haptic and display purposes. The model is based on elastostatic deformation characterized by a discrete Green's function matrix, which is extended using a viscoelastic add-on. The underlying idea is to replace the linear, time-independent relationships between the matrix entries, force, and displacement with non-linear, time-dependent relationships. A general framework is given, in which different viscoelastic models can be devised. One such model based on physical measurements is discussed in detail. Finally, the details and results of a system that implements this model are described.}, }
Endnote
%0 Thesis %A Schoner, Jeffrey Lawrence %Y Lang, Jochen %A referee: Seidel, Hans-Peter %+ International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T Interactive Haptics and Display for Viscoelastic Solids : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F859-E %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2003 %V master %9 master %X This thesis describes a way of modeling viscoelasticity in deformable solids for interactive haptic and display purposes. The model is based on elastostatic deformation characterized by a discrete Green's function matrix, which is extended using a viscoelastic add-on. The underlying idea is to replace the linear, time-independent relationships between the matrix entries, force, and displacement with non-linear, time-dependent relationships. A general framework is given, in which different viscoelastic models can be devised. One such model based on physical measurements is discussed in detail. Finally, the details and results of a system that implements this model are described.
[165]
H. Tiwary, “Orthogonal Range Searching,” Universität des Saarlandes, Saarbrücken, 2003.
Abstract
Orthogonal range searches arise in many areas of application, most often in database queries. Many techniques have been developed for this problem and for related geometric search problems where the points are replaced by general objects such as simplices, discs, etc. This report is a brief study of some of the techniques developed and how they can be combined to give various solutions for this problem. The motivation for this study was to find a general method to solve this problem in time O(log n) and space O(n polylog n), where the dependency on the dimension affects only the constant in the query time and the polylogarithmic factor in the space complexity. The study has not led to any such algorithm so far, but it helps us to see the common thread in some of the structures and tools available in the study of algorithms. Interestingly, some of these structures and tools were proposed for quite other purposes than answering Orthogonal Range Queries. A small part of the thesis also deals with Dominance Queries because of their close relation to Orthogonal Range Queries in terms of both problem definition and solution.
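As a point of reference for the problem statement, a 2-D orthogonal range query can be answered naively by presorting on one coordinate and filtering on the other. This sketch is a simple baseline only, not one of the polylogarithmic structures (such as range trees) that the thesis surveys:

```python
from bisect import bisect_left, bisect_right

# Naive 2-D orthogonal range search: presort by x, binary-search the x-slab,
# then scan that slab filtering by y. Query time is O(log n + s), where s is
# the slab size -- far from the O(log n) goal discussed in the abstract.
class RangeSearch2D:
    def __init__(self, points):
        self.pts = sorted(points)           # sorted by x, then y
        self.xs = [p[0] for p in self.pts]  # x-coordinates for binary search

    def query(self, x1, x2, y1, y2):
        """Return all points (x, y) with x1 <= x <= x2 and y1 <= y <= y2."""
        lo = bisect_left(self.xs, x1)
        hi = bisect_right(self.xs, x2)
        return [p for p in self.pts[lo:hi] if y1 <= p[1] <= y2]

rs = RangeSearch2D([(1, 5), (3, 2), (4, 9), (7, 1)])
assert rs.query(2, 5, 0, 8) == [(3, 2)]
```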
Export
BibTeX
@mastersthesis{Tiwary2003, TITLE = {Orthogonal Range Searching}, AUTHOR = {Tiwary, Hansraj}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Orthogonal range searches arise in many areas of application, most often in database queries. Many techniques have been developed for this problem and for related geometric search problems where the points are replaced by general objects such as simplices, discs, etc. This report is a brief study of some of the techniques developed and how they can be combined to give various solutions for this problem. The motivation for this study was to find a general method to solve this problem in time O(log n) and space O(n polylog n), where the dependency on the dimension affects only the constant in the query time and the polylogarithmic factor in the space complexity. The study has not led to any such algorithm so far, but it helps us to see the common thread in some of the structures and tools available in the study of algorithms. Interestingly, some of these structures and tools were proposed for quite other purposes than answering Orthogonal Range Queries. A small part of the thesis also deals with Dominance Queries because of their close relation to Orthogonal Range Queries in terms of both problem definition and solution.}, }
Endnote
%0 Thesis %A Tiwary, Hansraj %Y Seidel, Raimund %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations %T Orthogonal Range Searching : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F85D-6 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2003 %V master %9 master %X Orthogonal range searches arise in many areas of application, most often in database queries. Many techniques have been developed for this problem and for related geometric search problems where the points are replaced by general objects such as simplices, discs, etc. This report is a brief study of some of the techniques developed and how they can be combined to give various solutions for this problem. The motivation for this study was to find a general method to solve this problem in time O(log n) and space O(n polylog n), where the dependency on the dimension affects only the constant in the query time and the polylogarithmic factor in the space complexity. The study has not led to any such algorithm so far, but it helps us to see the common thread in some of the structures and tools available in the study of algorithms. Interestingly, some of these structures and tools were proposed for quite other purposes than answering Orthogonal Range Queries. A small part of the thesis also deals with Dominance Queries because of their close relation to Orthogonal Range Queries in terms of both problem definition and solution.
[166]
S. Tverdyshev, “Documentation and Modelling of the IPC Mechanism in the L4 Kernel,” Universität des Saarlandes, Saarbrücken, 2003.
Abstract
Summary: My diploma thesis is part of the project "Verification of the L4 operating system kernel". The aim of this project is to formally verify the L4 kernel in order to guarantee its correctness. In my thesis I have documented the implementation of the IPC mechanism in the L4 kernel and proved the correctness of the message-passing protocol of the IPC mechanism. This was done in three steps: first I created a formal specification of the protocol; then I built a model of the IPC mechanism in PVS; finally I proved that the model fulfils the specification. The model built in this work is not meant to be a precise representation of the original protocol. Proofs about such a model are not the same as proofs about the original C code, but they may guide the writing of correct source code. Therefore, for software verification we need a tool that can work with C code, or any other programming language, in order to verify its correctness.
Export
BibTeX
@mastersthesis{Tverdyshev2003, TITLE = {Documentation and Modelling of the {IPC} Mechanism in the {L4} Kernel}, AUTHOR = {Tverdyshev, Sergey}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Summary: My diploma thesis is part of the project "Verification of the L4 operating system kernel". The aim of this project is to formally verify the L4 kernel in order to guarantee its correctness. In my thesis I have documented the implementation of the IPC mechanism in the L4 kernel and proved the correctness of the message-passing protocol of the IPC mechanism. This was done in three steps: first I created a formal specification of the protocol; then I built a model of the IPC mechanism in PVS; finally I proved that the model fulfils the specification. The model built in this work is not meant to be a precise representation of the original protocol. Proofs about such a model are not the same as proofs about the original C code, but they may guide the writing of correct source code. Therefore, for software verification we need a tool that can work with C code, or any other programming language, in order to verify its correctness.}, }
Endnote
%0 Thesis %A Tverdyshev, Sergey %Y Paul, Wolfgang %A referee: Hermann, H. %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Documentation and Modelling of the IPC Mechanism in the L4 Kernel : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F863-5 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2003 %V master %9 master %X Summary: My diploma thesis is part of the project "Verification of the L4 operating system kernel". The aim of this project is to formally verify the L4 kernel in order to guarantee its correctness. In my thesis I have documented the implementation of the IPC mechanism in the L4 kernel and proved the correctness of the message-passing protocol of the IPC mechanism. This was done in three steps: first I created a formal specification of the protocol; then I built a model of the IPC mechanism in PVS; finally I proved that the model fulfils the specification. The model built in this work is not meant to be a precise representation of the original protocol. Proofs about such a model are not the same as proofs about the original C code, but they may guide the writing of correct source code. Therefore, for software verification we need a tool that can work with C code, or any other programming language, in order to verify its correctness.
2002
[167]
D. Michail, “Lobster- A Load Balanced P2P Content Sharing Network,” Universität des Saarlandes, Saarbrücken, 2002.
Abstract
The unpredictable growth of the Internet community, as well as of the size of information available, has overwhelmed the traditional models of distributed computing. Client/server computing seems unable to cope with the constantly increasing need for larger systems. The peer-to-peer (P2P) model, although originally conceived much earlier, has recently emerged as a new way to create distributed environments. The concept of peers that play both client and server roles seems very attractive, especially with respect to scalability. In this text, we present Lobster, a scalable P2P content sharing system which provides two fundamental properties: load balancing and short query response times. Our work includes the design as well as the implementation of the system in the Java programming language.
Export
BibTeX
@mastersthesis{Michail2002, TITLE = {Lobster- A Load Balanced {P2P} Content Sharing Network}, AUTHOR = {Michail, Dimitrios}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {The unpredictable growth of the Internet community, as well as of the size of information available, has overwhelmed the traditional models of distributed computing. Client/server computing seems unable to cope with the constantly increasing need for larger systems. The peer-to-peer (P2P) model, although originally conceived much earlier, has recently emerged as a new way to create distributed environments. The concept of peers that play both client and server roles seems very attractive, especially with respect to scalability. In this text, we present Lobster, a scalable P2P content sharing system which provides two fundamental properties: load balancing and short query response times. Our work includes the design as well as the implementation of the system in the Java programming language.}, }
Endnote
%0 Thesis %A Michail, Dimitrios %Y Koubarakis, Manolis %A referee: Petrakis, Euripides %A referee: Triantafillou, Peter %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T Lobster- A Load Balanced P2P Content Sharing Network : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-D1AA-E %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2002 %V master %9 master %X The unpredictable growth of the Internet community, as well as of the size of information available, has overwhelmed the traditional models of distributed computing. Client/server computing seems unable to cope with the constantly increasing need for larger systems. The peer-to-peer (P2P) model, although originally conceived much earlier, has recently emerged as a new way to create distributed environments. The concept of peers that play both client and server roles seems very attractive, especially with respect to scalability. In this text, we present Lobster, a scalable P2P content sharing system which provides two fundamental properties: load balancing and short query response times. Our work includes the design as well as the implementation of the system in the Java programming language.
[168]
A. Rybalchenko, “A Model Checker based on Abstraction Refinement,” Universität des Saarlandes, Saarbrücken, 2002.
Abstract
Abstraction plays an important role for verification of computer programs. We want to construct the right abstraction automatically. There is a promising approach to do this, called {\it predicate abstraction}. An insufficiently precise abstraction can be {\it automatically refined}. There is an automated model checking method described in [Ball, Podelski, Rajamani TACAS02] which combines both techniques, namely predicate abstraction and abstraction refinement. The quality of the method is expressed by a completeness property relative to a powerful but unrealistic oracle-guided algorithm. \par In this work we generalize the results from [Ball, Podelski, Rajamani TACAS02] and introduce new abstraction functions with different precision. We implement the new abstraction functions in a model checker and practically evaluate their effectiveness in verifying various computer programs.
Export
BibTeX
@mastersthesis{Rybalchenko2002, TITLE = {A Model Checker based on Abstraction Refinement}, AUTHOR = {Rybalchenko, Andrey}, LANGUAGE = {eng}, LOCALID = {Local-ID: C1256104005ECAFC-E198F9F27C896D72C1256D0A0037AD19-Rybalchenko2002}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {Abstraction plays an important role for verification of computer programs. We want to construct the right abstraction automatically. There is a promising approach to do this, called {\it predicate abstraction}. An insufficiently precise abstraction can be {\it automatically refined}. There is an automated model checking method described in [Ball, Podelski, Rajamani TACAS02] which combines both techniques, namely predicate abstraction and abstraction refinement. The quality of the method is expressed by a completeness property relative to a powerful but unrealistic oracle-guided algorithm. \par In this work we generalize the results from [Ball, Podelski, Rajamani TACAS02] and introduce new abstraction functions with different precision. We implement the new abstraction functions in a model checker and practically evaluate their effectiveness in verifying various computer programs.}, }
Endnote
%0 Thesis %A Rybalchenko, Andrey %Y Podelski, Andreas %A referee: Wilhelm, Reinhard %+ Programming Logics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Programming Logics, MPI for Informatics, Max Planck Society External Organizations %T A Model Checker based on Abstraction Refinement : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2EF1-F %F EDOC: 202126 %F OTHER: Local-ID: C1256104005ECAFC-E198F9F27C896D72C1256D0A0037AD19-Rybalchenko2002 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2002 %V master %9 master %X Abstraction plays an important role for verification of computer programs. We want to construct the right abstraction automatically. There is a promising approach to do this, called {\it predicate abstraction}. An insufficiently precise abstraction can be {\it automatically refined}. There is an automated model checking method described in [Ball, Podelski, Rajamani TACAS02] which combines both techniques, namely predicate abstraction and abstraction refinement. The quality of the method is expressed by a completeness property relative to a powerful but unrealistic oracle-guided algorithm. \par In this work we generalize the results from [Ball, Podelski, Rajamani TACAS02] and introduce new abstraction functions with different precision. We implement the new abstraction functions in a model checker and practically evaluate their effectiveness in verifying various computer programs.

PhD Thesis

2017
[1]
P. Danilewski, “ManyDSL One Host for All Language Need,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Languages shape thoughts. This is true for human spoken languages as much as for programming languages. As computers continue to expand their dominance in almost every aspect of our lives, the need to more adequately express new concepts and domains in computer languages arises. However, to evolve our thoughts we need to evolve the languages we speak. But what tools are there to create and upgrade computer languages? How can we encourage developers to define their own languages quickly, to best match the domains they work in? Nowadays two main approaches exist. Dedicated language tools and parser generators allow defining new standalone languages from scratch. Alternatively, one can “abuse” sufficiently flexible host languages to embed small domain-specific languages within them. Both approaches have their respective limitations. Creating standalone languages is a major endeavor, and such languages cannot be combined easily with other languages. Embedding, on the other hand, is limited by the syntax of the host language. Embedded languages, once defined, are always present, without a clear distinction between them and the host language. When used extensively, this leads to one humongous conglomerate of languages, with confusing syntax and unexpected interactions. In this work we present an alternative: ManyDSL. It is a unique interpreter and compiler taking strength from both approaches while avoiding the above weaknesses. ManyDSL features its own LL(1) parser generator, breaking the limits of the syntax of the host language. The grammar description is given in the same host language as the rest of the program. Portions of the grammar can be parametrized and abstracted into functions, in order to be used in other language definitions. Languages are created on the fly during the interpretation process and may be used to parse selected fragments of the subsequent source files.
Similarly to embedded languages, ManyDSL translates all custom languages to the same host language before execution. The host language uses a continuation- passing style approach with a novel, dynamic approach to staging. The staging allows for arbitrary partial evaluation, and executing code at different phases of the compilation process. This can be used to define domain-specific optimiza- tions and auxiliary computation (e.g. for verification) — all within an entirely functional approach, without any explicit use of abstract syntax trees and code transformations. With the help of ManyDSL a user is able to create new languages with distinct, easily recognizable syntax. Moreover, he is able to define and use many of such languages within a single project. Languages can be switched with a well-defined boundary, enabling their interaction in a clear and controlled way. ManyDSL is meant to be the first step towards a broader language pluralism. With it we want to encourage developers to design and use languages that best suit their needs. We believe that over time, with the help of grammar libraries, creating new languages will become accessible to every programmer.
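The grammar-fragments-as-functions idea described above can be illustrated with a tiny parser-combinator sketch. This is a hypothetical illustration only: `lit`, `seq`, `many`, and `delimited` are names invented here, not ManyDSL's actual host language or API.

```python
# Grammar fragments as ordinary functions: they can be parametrized,
# abstracted, and reused across different language definitions.

def lit(s):
    """Match a literal string; return (value, rest) or None on failure."""
    def parse(text):
        return (s, text[len(s):]) if text.startswith(s) else None
    return parse

def seq(*parsers):
    """Match parsers in sequence, collecting their values."""
    def parse(text):
        values = []
        for p in parsers:
            result = p(text)
            if result is None:
                return None
            value, text = result
            values.append(value)
        return values, text
    return parse

def many(p):
    """Match p zero or more times (greedy)."""
    def parse(text):
        values = []
        while True:
            result = p(text)
            if result is None:
                return values, text
            value, text = result
            values.append(value)
    return parse

def delimited(item, sep):
    """A parametrized grammar fragment: item (sep item)*."""
    return seq(item, many(seq(sep, item)))

# The same fragment can be instantiated in different language definitions:
csv_row = delimited(lit("a"), lit(","))
print(csv_row("a,a,a"))
```

Because `delimited` is just a function, a library of such fragments plays the role the abstract assigns to grammar libraries.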
Export
BibTeX
@phdthesis{Danilewskiphd17, TITLE = {Many{DSL} One Host for All Language Need}, AUTHOR = {Danilewski, Piotr}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-68840}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Languages shape thoughts. This is true for human spoken languages as much as for programming languages. As computers continue to expand their dominance in almost every aspect of our lives, the need to more adequately express new concepts and domains in computer languages arises. However, to evolve our thoughts we need to evolve the languages we speak. But what tools are there to create and upgrade computer languages? How can we encourage developers to define their own languages quickly, to best match the domains they work in? Nowadays two main approaches exist. Dedicated language tools and parser generators allow defining new standalone languages from scratch. Alternatively, one can {\textquotedblleft}abuse{\textquotedblright} sufficiently flexible host languages to embed small domain-specific languages within them. Both approaches have their own respective limitations. Creating standalone languages is a major endeavor, and such languages cannot be combined easily with other languages. Embedding, on the other hand, is limited by the syntax of the host language. Embedded languages, once defined, are always present, with no clear distinction between them and the host language. When used extensively, this leads to one humongous conglomerate of languages, with confusing syntax and unexpected interactions. In this work we present an alternative: ManyDSL. It is a unique interpreter and compiler that takes strengths from both approaches while avoiding the above weaknesses. ManyDSL features its own LL(1) parser generator, breaking the limits of the syntax of the host language. The grammar description is given in the same host language as the rest of the program. 
Portions of the grammar can be parametrized and abstracted into functions, to be reused in other language definitions. Languages are created on the fly during the interpretation process and may be used to parse selected fragments of the subsequent source files. Similarly to embedded languages, ManyDSL translates all custom languages to the same host language before execution. The host language uses a continuation-passing style approach with a novel, dynamic approach to staging. The staging allows for arbitrary partial evaluation and for executing code at different phases of the compilation process. This can be used to define domain-specific optimizations and auxiliary computation (e.g. for verification) --- all within an entirely functional approach, without any explicit use of abstract syntax trees and code transformations. With the help of ManyDSL a user is able to create new languages with distinct, easily recognizable syntax. Moreover, they are able to define and use many such languages within a single project. Languages can be switched at a well-defined boundary, enabling their interaction in a clear and controlled way. ManyDSL is meant to be the first step towards a broader language pluralism. With it we want to encourage developers to design and use languages that best suit their needs. We believe that over time, with the help of grammar libraries, creating new languages will become accessible to every programmer.}, }
Endnote
%0 Thesis %A Danilewski, Piotr %Y Slusallek, Philipp %A referee: Wilhelm, Reinhard %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations %T ManyDSL One Host for All Language Need : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-934E-8 %U urn:nbn:de:bsz:291-scidok-68840 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 257 p. %V phd %9 phd %X Languages shape thoughts. This is true for human spoken languages as much as for programming languages. As computers continue to expand their dominance in almost every aspect of our lives, the need to more adequately express new concepts and domains in computer languages arises. However, to evolve our thoughts we need to evolve the languages we speak. But what tools are there to create and upgrade computer languages? How can we encourage developers to define their own languages quickly, to best match the domains they work in? Nowadays two main approaches exist. Dedicated language tools and parser generators allow defining new standalone languages from scratch. Alternatively, one can &#8220;abuse&#8221; sufficiently flexible host languages to embed small domain-specific languages within them. Both approaches have their own respective limitations. Creating standalone languages is a major endeavor, and such languages cannot be combined easily with other languages. Embedding, on the other hand, is limited by the syntax of the host language. Embedded languages, once defined, are always present, with no clear distinction between them and the host language. When used extensively, this leads to one humongous conglomerate of languages, with confusing syntax and unexpected interactions. In this work we present an alternative: ManyDSL. It is a unique interpreter and compiler that takes strengths from both approaches while avoiding the above weaknesses. 
ManyDSL features its own LL(1) parser generator, breaking the limits of the syntax of the host language. The grammar description is given in the same host language as the rest of the program. Portions of the grammar can be parametrized and abstracted into functions, to be reused in other language definitions. Languages are created on the fly during the interpretation process and may be used to parse selected fragments of the subsequent source files. Similarly to embedded languages, ManyDSL translates all custom languages to the same host language before execution. The host language uses a continuation-passing style approach with a novel, dynamic approach to staging. The staging allows for arbitrary partial evaluation and for executing code at different phases of the compilation process. This can be used to define domain-specific optimizations and auxiliary computation (e.g. for verification) &#8212; all within an entirely functional approach, without any explicit use of abstract syntax trees and code transformations. With the help of ManyDSL a user is able to create new languages with distinct, easily recognizable syntax. Moreover, they are able to define and use many such languages within a single project. Languages can be switched at a well-defined boundary, enabling their interaction in a clear and controlled way. ManyDSL is meant to be the first step towards a broader language pluralism. With it we want to encourage developers to design and use languages that best suit their needs. We believe that over time, with the help of grammar libraries, creating new languages will become accessible to every programmer. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6884/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[2]
S. Dutta, “Efficient Knowledge Management for Named Entities from Text,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge, in this regard, entails the need for precise identification and disambiguation of entities across documents for extraction of attributes/relations and their proper representation in knowledge bases. Additionally, the applicability of such repositories not only involves the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation aims to tackle the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. This dissertation presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and their disambiguation to canonical entities present in a knowledge base, by using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. This work further presents language and consistency features for classification models to compute the credibility of obtained textual facts, ensuring quality of the extracted information. Finally, an encoding algorithm, using frequent term detection and improved data locality, to represent entities for enhanced knowledge base storage and query performance is presented.
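The cross-document mention-grouping task described above can be illustrated with a toy greedy clustering over context overlap. This is a deliberate simplification, not the thesis's enriched-context hierarchical clustering framework, and all names and data below are invented.

```python
# Group entity mentions by the word overlap of their surrounding contexts:
# mentions of the same real-world entity tend to share context vocabulary.

def jaccard(a, b):
    """Jaccard similarity of two word collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_mentions(mentions, threshold=0.3):
    """mentions: list of (surface_name, context_words). Greedy single-pass
    clustering: join the first cluster with a sufficiently similar member."""
    clusters = []
    for name, ctx in mentions:
        for cluster in clusters:
            if any(jaccard(ctx, c) >= threshold for _, c in cluster):
                cluster.append((name, ctx))
                break
        else:
            clusters.append([(name, ctx)])
    return clusters

mentions = [
    ("Paris", ["france", "capital", "seine"]),
    ("Paris", ["eiffel", "france", "capital"]),
    ("Paris", ["hilton", "celebrity", "tv"]),
]
print(len(cluster_mentions(mentions)))  # the city vs. the person
```

The ambiguous surface form "Paris" splits into two groups because the contexts do not overlap, which is the core signal a full disambiguation pipeline refines with semantic enrichment and link validation.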
Export
BibTeX
@phdthesis{duttaphd17, TITLE = {Efficient Knowledge Management for Named Entities from Text}, AUTHOR = {Dutta, Sourav}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67924}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge, in this regard, entails the need for precise identification and disambiguation of entities across documents for extraction of attributes/relations and their proper representation in knowledge bases. Additionally, the applicability of such repositories not only involves the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation aims to tackle the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. This dissertation presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and their disambiguation to canonical entities present in a knowledge base, by using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. This work further presents language and consistency features for classification models to compute the credibility of obtained textual facts, ensuring quality of the extracted information. 
Finally, an encoding algorithm, using frequent term detection and improved data locality, to represent entities for enhanced knowledge base storage and query performance is presented.}, }
Endnote
%0 Thesis %A Dutta, Sourav %Y Weikum, Gerhard %A referee: Nejdl, Wolfgang %A referee: Berberich, Klaus %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T Efficient Knowledge Management for Named Entities from Text : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-A793-E %U urn:nbn:de:bsz:291-scidok-67924 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P xv, 134 p. %V phd %9 phd %X The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge, in this regard, entails the need for precise identification and disambiguation of entities across documents for extraction of attributes/relations and their proper representation in knowledge bases. Additionally, the applicability of such repositories not only involves the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation aims to tackle the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. 
This dissertation presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and their disambiguation to canonical entities present in a knowledge base, by using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. This work further presents language and consistency features for classification models to compute the credibility of obtained textual facts, ensuring quality of the extracted information. Finally, an encoding algorithm, using frequent term detection and improved data locality, to represent entities for enhanced knowledge base storage and query performance is presented. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6792/
[3]
Y. Gryaditskaya, “High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Two-dimensional, conventional images are gradually losing their hegemony, leaving room for novel formats. Among these, 8-bit images are giving way to high dynamic range (HDR) image formats, which improve the colour gamut and the visibility of details in dark and bright areas, leading to a more immersive viewing experience. This opens wide opportunities for post-processing, which can be useful for artistic rendering, enhancement of the viewing experience, or medical applications. Simultaneously, light-field scene representation is also gaining importance, propelled by the recent reappearance of virtual reality and the improvement of both acquisition techniques and computational and storage capabilities. Light-field data likewise allows a broad range of effects to be achieved in post-production: among others, it enables a change of camera position, aperture, or focal length. It facilitates object insertions and simplifies the visual effects workflow by integrating the 3D nature of visual effects with the 3D nature of light fields. Content generation is one of the stumbling blocks in these realms. The sensor limitations of a conventional camera do not allow capturing a wide dynamic range. This is especially the case for mobile devices, where small sensors are optimised for capturing in high resolution. The “HDR mode” often encountered on such devices relies on a technique called “exposure fusion” and partially overcomes the limited range of a sensor. HDR video, at the same time, remains a challenging problem. We suggest a solution for HDR video capture on a mobile device. We analyse the dynamic range of motion regions, the regions most prone to reconstruction artefacts, and suggest a real-time exposure selection algorithm. Further, an HDR content visualization task often requires the input to be in absolute values. We address this problem by presenting a calibration algorithm that can be applied to existing imagery and does not require any additional measurement hardware. 
Finally, as the use of light fields becomes more common, a key challenge is the ability to edit or modify the appearance of the objects in the light field. To this end, we propose a multidimensional filtering approach in which the specular highlights are filtered in the spatial and angular domains to target a desired increase of the material roughness.
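The "exposure fusion" technique the abstract refers to can be sketched in a few lines: blend differently exposed frames per pixel, weighting each sample by how well exposed it is. This is a minimal illustration only, simplified to 1-D grayscale values in [0, 1]; real implementations also weight by contrast and saturation and blend across a multi-scale pyramid, and the thesis's contribution is the exposure selection for video, not this fusion step itself.

```python
# Per-pixel weighted blend of bracketed exposures: samples near mid-gray
# (well exposed) dominate; clipped shadows and highlights are downweighted.
import math

def well_exposedness(v, sigma=0.2):
    """Gaussian weight centered at mid-gray."""
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Weighted per-pixel average of equally sized 1-D 'images'."""
    fused = []
    for samples in zip(*exposures):
        weights = [well_exposedness(v) for v in samples]
        total = sum(weights) or 1.0
        fused.append(sum(w * v for w, v in zip(weights, samples)) / total)
    return fused

dark = [0.02, 0.10, 0.45]    # underexposed frame preserves highlights
bright = [0.40, 0.80, 0.98]  # overexposed frame preserves shadows
print(fuse([dark, bright]))  # each pixel leans toward the better-exposed sample
```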
Export
BibTeX
@phdthesis{Gryphd17, TITLE = {High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing}, AUTHOR = {Gryaditskaya, Yulia}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69296}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Two-dimensional, conventional images are gradually losing their hegemony, leaving room for novel formats. Among these, 8-bit images are giving way to high dynamic range (HDR) image formats, which improve the colour gamut and the visibility of details in dark and bright areas, leading to a more immersive viewing experience. This opens wide opportunities for post-processing, which can be useful for artistic rendering, enhancement of the viewing experience, or medical applications. Simultaneously, light-field scene representation is also gaining importance, propelled by the recent reappearance of virtual reality and the improvement of both acquisition techniques and computational and storage capabilities. Light-field data likewise allows a broad range of effects to be achieved in post-production: among others, it enables a change of camera position, aperture, or focal length. It facilitates object insertions and simplifies the visual effects workflow by integrating the 3D nature of visual effects with the 3D nature of light fields. Content generation is one of the stumbling blocks in these realms. The sensor limitations of a conventional camera do not allow capturing a wide dynamic range. This is especially the case for mobile devices, where small sensors are optimised for capturing in high resolution. The {\textquotedblleft}HDR mode{\textquotedblright} often encountered on such devices relies on a technique called {\textquotedblleft}exposure fusion{\textquotedblright} and partially overcomes the limited range of a sensor. HDR video, at the same time, remains a challenging problem. 
We suggest a solution for HDR video capture on a mobile device. We analyse the dynamic range of motion regions, the regions most prone to reconstruction artefacts, and suggest a real-time exposure selection algorithm. Further, an HDR content visualization task often requires the input to be in absolute values. We address this problem by presenting a calibration algorithm that can be applied to existing imagery and does not require any additional measurement hardware. Finally, as the use of light fields becomes more common, a key challenge is the ability to edit or modify the appearance of the objects in the light field. To this end, we propose a multidimensional filtering approach in which the specular highlights are filtered in the spatial and angular domains to target a desired increase of the material roughness.}, }
Endnote
%0 Thesis %A Gryaditskaya, Yulia %Y Seidel, Hans-Peter %A referee: Myszkowski, Karol %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society %T High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-ABA6-3 %U urn:nbn:de:bsz:291-scidok-69296 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 88 p. %V phd %9 phd %X Two-dimensional, conventional images are gradually losing their hegemony, leaving room for novel formats. Among these, 8-bit images are giving way to high dynamic range (HDR) image formats, which improve the colour gamut and the visibility of details in dark and bright areas, leading to a more immersive viewing experience. This opens wide opportunities for post-processing, which can be useful for artistic rendering, enhancement of the viewing experience, or medical applications. Simultaneously, light-field scene representation is also gaining importance, propelled by the recent reappearance of virtual reality and the improvement of both acquisition techniques and computational and storage capabilities. Light-field data likewise allows a broad range of effects to be achieved in post-production: among others, it enables a change of camera position, aperture, or focal length. It facilitates object insertions and simplifies the visual effects workflow by integrating the 3D nature of visual effects with the 3D nature of light fields. Content generation is one of the stumbling blocks in these realms. The sensor limitations of a conventional camera do not allow capturing a wide dynamic range. This is especially the case for mobile devices, where small sensors are optimised for capturing in high resolution. 
The &#8220;HDR mode&#8221; often encountered on such devices relies on a technique called &#8220;exposure fusion&#8221; and partially overcomes the limited range of a sensor. HDR video, at the same time, remains a challenging problem. We suggest a solution for HDR video capture on a mobile device. We analyse the dynamic range of motion regions, the regions most prone to reconstruction artefacts, and suggest a real-time exposure selection algorithm. Further, an HDR content visualization task often requires the input to be in absolute values. We address this problem by presenting a calibration algorithm that can be applied to existing imagery and does not require any additional measurement hardware. Finally, as the use of light fields becomes more common, a key challenge is the ability to edit or modify the appearance of the objects in the light field. To this end, we propose a multidimensional filtering approach in which the specular highlights are filtered in the spatial and angular domains to target a desired increase of the material roughness. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6929/
[4]
A. Grycner, “Constructing Lexicons of Relational Phrases,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Knowledge Bases are one of the key components of Natural Language Understanding systems. For example, DBpedia, YAGO, and Wikidata capture and organize knowledge about named entities and relations between them, which is often crucial for tasks like Question Answering and Named Entity Disambiguation. While Knowledge Bases have good coverage of prominent entities, they are often limited with respect to relations. The goal of this thesis is to bridge this gap and automatically create lexicons of textual representations of relations, namely relational phrases. The lexicons should contain information about paraphrases, hierarchy, as well as semantic types of arguments of relational phrases. The thesis makes three main contributions. The first contribution addresses disambiguating relational phrases by aligning them with the WordNet dictionary. Moreover, the alignment allows imposing the WordNet hierarchy on the relational phrases. The second contribution proposes a method for graph construction of relations using Probabilistic Graphical Models. In addition, we apply this model to relation paraphrasing. The third contribution presents a method for constructing a lexicon of relational paraphrases with fine-grained semantic typing of arguments. This method is based on information from a multilingual parallel corpus.
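The paraphrase-mining contribution above rests on a distributional intuition: relational phrases that connect many of the same entity pairs are likely paraphrases. The thesis realizes this with probabilistic graphical models and a multilingual parallel corpus; the toy sketch below, with invented data, shows only that underlying signal.

```python
# Score phrase pairs by the Jaccard overlap of the entity pairs they connect.
from collections import defaultdict

def paraphrase_scores(triples):
    """triples: (subject, phrase, object). Returns {(p, q): overlap score}."""
    pairs = defaultdict(set)
    for subj, phrase, obj in triples:
        pairs[phrase].add((subj, obj))
    phrases = sorted(pairs)
    scores = {}
    for i, p in enumerate(phrases):
        for q in phrases[i + 1:]:
            overlap = pairs[p] & pairs[q]
            union = pairs[p] | pairs[q]
            scores[(p, q)] = len(overlap) / len(union)
    return scores

triples = [
    ("Einstein", "was born in", "Ulm"),
    ("Einstein", "is a native of", "Ulm"),
    ("Curie", "was born in", "Warsaw"),
    ("Curie", "is a native of", "Warsaw"),
    ("Einstein", "died in", "Princeton"),
]
scores = paraphrase_scores(triples)
print(scores[("is a native of", "was born in")])  # 1.0: identical pair sets
```

"was born in" and "is a native of" share all their entity pairs and score 1.0, while "died in" shares none; a full lexicon additionally types the arguments (person, location) and organizes the phrases hierarchically.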
Export
BibTeX
@phdthesis{Grynerphd17, TITLE = {Constructing Lexicons of Relational Phrases}, AUTHOR = {Grycner, Adam}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69101}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Knowledge Bases are one of the key components of Natural Language Understanding systems. For example, DBpedia, YAGO, and Wikidata capture and organize knowledge about named entities and relations between them, which is often crucial for tasks like Question Answering and Named Entity Disambiguation. While Knowledge Bases have good coverage of prominent entities, they are often limited with respect to relations. The goal of this thesis is to bridge this gap and automatically create lexicons of textual representations of relations, namely relational phrases. The lexicons should contain information about paraphrases, hierarchy, as well as semantic types of arguments of relational phrases. The thesis makes three main contributions. The first contribution addresses disambiguating relational phrases by aligning them with the WordNet dictionary. Moreover, the alignment allows imposing the WordNet hierarchy on the relational phrases. The second contribution proposes a method for graph construction of relations using Probabilistic Graphical Models. In addition, we apply this model to relation paraphrasing. The third contribution presents a method for constructing a lexicon of relational paraphrases with fine-grained semantic typing of arguments. This method is based on information from a multilingual parallel corpus.}, }
Endnote
%0 Thesis %A Grycner, Adam %Y Weikum, Gerhard %A referee: Klakow, Dietrich %A referee: Ponzetto, Simone Paolo %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Constructing Lexicons of Relational Phrases : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-933B-1 %U urn:nbn:de:bsz:291-scidok-69101 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 125 p. %V phd %9 phd %X Knowledge Bases are one of the key components of Natural Language Understanding systems. For example, DBpedia, YAGO, and Wikidata capture and organize knowledge about named entities and relations between them, which is often crucial for tasks like Question Answering and Named Entity Disambiguation. While Knowledge Bases have good coverage of prominent entities, they are often limited with respect to relations. The goal of this thesis is to bridge this gap and automatically create lexicons of textual representations of relations, namely relational phrases. The lexicons should contain information about paraphrases, hierarchy, as well as semantic types of arguments of relational phrases. The thesis makes three main contributions. The first contribution addresses disambiguating relational phrases by aligning them with the WordNet dictionary. Moreover, the alignment allows imposing the WordNet hierarchy on the relational phrases. The second contribution proposes a method for graph construction of relations using Probabilistic Graphical Models. In addition, we apply this model to relation paraphrasing. The third contribution presents a method for constructing a lexicon of relational paraphrases with fine-grained semantic typing of arguments. This method is based on information from a multilingual parallel corpus. 
%U http://scidok.sulb.uni-saarland.de/volltexte/2017/6910/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[5]
S. Gurajada, “Distributed Querying of Large Labeled Graphs,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
A graph is a vital abstract data type that has profound significance in several applications. Because of its versatility, graphs have been adapted into several different forms, and one such adaptation with many practical applications is the “Labeled Graph”, where vertices and edges are labeled. An enormous research effort has been invested into the task of managing and querying graphs, yet many challenges are left unsolved. In this thesis, we advance the state of the art for the following query models, and propose a distributed solution to process them in an efficient and scalable manner. • Set Reachability. We formalize and investigate a generalization of the basic notion of reachability, called set reachability. Set reachability deals with finding all reachable pairs between a given source set and target set. We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query. This is achieved by precomputation, replication, and indexing of partial reachabilities among the boundary vertices. • Basic Graph Patterns (BGP). Supported by the majority of query languages, BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on the concepts of asynchronous execution, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner. • Generalized Graph Patterns (GGP). These queries combine the semantics of pattern matching and navigational queries, and are popular in scenarios where the schema of the underlying graph is either unknown or only partially known. We present a distributed solution with a bimodal indexing layout that individually supports efficient processing of BGP queries and navigational queries. Furthermore, we design a unified query optimizer and processor to handle GGP queries efficiently and scalably. 
To this end, we propose a prototype distributed engine, coined “TriAD” (Triple Asynchronous and Distributed), that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets.
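The set-reachability query formalized above has a simple single-machine reading: given a source set S and a target set T, report every pair (s, t) with t reachable from s. The thesis answers such queries distributedly in one communication round via precomputed boundary reachabilities; the BFS sketch below only pins down what the query computes, as a naive baseline.

```python
# Naive set-reachability: one BFS per source vertex, collecting the targets
# each source can reach. Serves as a semantic reference, not as the
# distributed algorithm from the thesis.
from collections import deque

def set_reachability(graph, sources, targets):
    """graph: adjacency dict {vertex: [successors]}. Returns reachable pairs."""
    targets = set(targets)
    result = set()
    for s in sources:
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u in targets:
                result.add((s, u))
            for v in graph.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return result

g = {"a": ["b"], "b": ["c"], "d": ["c"]}
print(set_reachability(g, ["a", "d"], ["c"]))  # both 'a' and 'd' reach 'c'
```

This baseline costs one traversal per source; the distributed solution avoids that by replicating and indexing partial reachabilities among partition-boundary vertices.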
Export
BibTeX
@phdthesis{guraphd2017, TITLE = {Distributed Querying of Large Labeled Graphs}, AUTHOR = {Gurajada, Sairam}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-67738}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Graph is a vital abstract data type that has profound significance in several applications. Because of its versitality, graphs have been adapted into several different forms and one such adaption with many practical applications is the {\textquotedblleft}Labeled Graph{\textquotedblright}, where vertices and edges are labeled. An enormous research effort has been invested in to the task of managing and querying graphs, yet a lot challenges are left unsolved. In this thesis, we advance the state-of-the-art for the following query models, and propose a distributed solution to process them in an efficient and scalable manner. \mbox{$\bullet$} Set Reachability. We formalize and investigate a generalization of the basic notion of reachability, called set reachability. Set reachability deals with finding all reachable pairs for a given source and target sets. We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query. This is achieved by precomputation, replication, and indexing of partial reachabilities among the boundary vertices. \mbox{$\bullet$} Basic Graph Patterns (BGP). Supported by majority of query languages, BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on the concepts of asynchronous executions, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner. \mbox{$\bullet$} Generalized Graph Patterns (GGP). 
These queries combine the semantics of pattern matching and navigational queries, and are popular in scenarios where the schema of an underlying graph is either unknown or partially known. We present a distributed solution with a bimodal indexing layout that individually supports efficient processing of BGP queries and navigational queries. Furthermore, we design a unified query optimizer and a processor to process GGP queries in an efficient and scalable manner. To this end, we propose a prototype distributed engine, coined {\textquotedblleft}TriAD{\textquotedblright} (Triple Asynchronous and Distributed) that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets.}, }
Endnote
%0 Thesis %A Gurajada, Sairam %Y Theobald, Martin %A referee: Weikum, Gerhard %A referee: &#214;zsu, M. Tamer %A referee: Michel, Sebastian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations Databases and Information Systems, MPI for Informatics, Max Planck Society %T Distributed Querying of Large Labeled Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-8202-E %U urn:nbn:de:bsz:291-scidok-67738 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P x, 167 p. %V phd %9 phd %X Graph is a vital abstract data type that has profound significance in several applications. Because of its versatility, graphs have been adapted into several different forms, and one such adaptation with many practical applications is the &#8220;Labeled Graph&#8221;, where vertices and edges are labeled. An enormous research effort has been invested into the task of managing and querying graphs, yet a lot of challenges are left unsolved. In this thesis, we advance the state-of-the-art for the following query models, and propose a distributed solution to process them in an efficient and scalable manner. &#8226; Set Reachability. We formalize and investigate a generalization of the basic notion of reachability, called set reachability. Set reachability deals with finding all reachable pairs for given source and target sets. We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query. This is achieved by precomputation, replication, and indexing of partial reachabilities among the boundary vertices. &#8226; Basic Graph Patterns (BGP). 
Supported by the majority of query languages, BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on the concepts of asynchronous executions, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner. &#8226; Generalized Graph Patterns (GGP). These queries combine the semantics of pattern matching and navigational queries, and are popular in scenarios where the schema of an underlying graph is either unknown or partially known. We present a distributed solution with a bimodal indexing layout that individually supports efficient processing of BGP queries and navigational queries. Furthermore, we design a unified query optimizer and a processor to process GGP queries in an efficient and scalable manner. To this end, we propose a prototype distributed engine, coined &#8220;TriAD&#8221; (Triple Asynchronous and Distributed) that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6773/
[6]
J. Hosang, “Analysis and Improvement of the Visual Object Detection Pipeline,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Visual object detection has seen substantial improvements over the last few years due to the possibilities enabled by deep learning. While research on image classification provides continuous progress on how to learn image representations and classifiers jointly, object detection research focuses on identifying how to properly use deep learning technology to effectively localise objects. In this thesis, we analyse and improve different aspects of the commonly used detection pipeline. We analyse ten years of research on pedestrian detection and find that improvement of feature representations was the driving factor. Motivated by this finding, we adapt an end-to-end learned detector architecture from general object detection to pedestrian detection. Our deep network outperforms all previous neural networks for pedestrian detection by a large margin, even without using additional training data. After substantial improvements on pedestrian detection in recent years, we investigate the gap between human performance and state-of-the-art pedestrian detectors. We find that pedestrian detectors still have a long way to go before they reach human performance, and we diagnose failure modes of several top-performing detectors, giving direction to future research. As a side effect, we publish new, better-localised annotations for the Caltech pedestrian benchmark. We analyse detection proposals as a preprocessing step for object detectors. We establish different metrics and compare a wide range of methods according to these metrics. By examining the relationship between localisation of proposals and final object detection performance, we define and experimentally verify a metric that can be used as a proxy for detector performance. Furthermore, we address a structural weakness of virtually all object detection pipelines: non-maximum suppression. We analyse why it is necessary and what the shortcomings of the most common approach are. 
To address these problems, we present work to overcome these shortcomings and to replace typical non-maximum suppression with a learnable alternative. The introduced paradigm paves the way to true end-to-end learning of object detectors without any post-processing. In summary, this thesis provides analyses of recent pedestrian detectors and detection proposals, improves pedestrian detection by employing deep neural networks, and presents a viable alternative to traditional non-maximum suppression.
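For reference, the "most common approach" to non-maximum suppression that the thesis critiques is conventionally the following greedy procedure. This is a generic sketch of standard greedy NMS, not the learnable alternative the thesis proposes:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring remaining box
    and discard all boxes overlapping it by more than `thresh` IoU.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep
```

The hand-set IoU threshold and the hard, non-differentiable keep/discard decision are exactly the properties that make this step a post-processing bolt-on rather than part of end-to-end learning.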
Export
BibTeX
@phdthesis{Hosangphd17, TITLE = {Analysis and Improvement of the Visual Object Detection Pipeline}, AUTHOR = {Hosang, Jan}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69080}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Visual object detection has seen substantial improvements during the last years due to the possibilities enabled by deep learning. While research on image classification provides continuous progress on how to learn image representations and classifiers jointly, object detection research focuses on identifying how to properly use deep learning technology to effectively localise objects. In this thesis, we analyse and improve different aspects of the commonly used detection pipeline. We analyse ten years of research on pedestrian detection and find that improvement of feature representations was the driving factor. Motivated by this finding, we adapt an end-to-end learned detector architecture from general object detection to pedestrian detection. Our deep network outperforms all previous neural networks for pedestrian detection by a large margin, even without using additional training data. After substantial improvements on pedestrian detection in recent years, we investigate the gap between human performance and state-of-the-art pedestrian detectors. We find that pedestrian detectors still have a long way to go before they reach human performance, and we diagnose failure modes of several top performing detectors, giving direction to future research. As a side-effect we publish new, better localised annotations for the Caltech pedestrian benchmark. We analyse detection proposals as a preprocessing step for object detectors. We establish different metrics and compare a wide range of methods according to these metrics. 
By examining the relationship between localisation of proposals and final object detection performance, we define and experimentally verify a metric that can be used as a proxy for detector performance. Furthermore, we address a structural weakness of virtually all object detection pipelines: non-maximum suppression. We analyse why it is necessary and what the shortcomings of the most common approach are. To address these problems, we present work to overcome these shortcomings and to replace typical non-maximum suppression with a learnable alternative. The introduced paradigm paves the way to true end-to-end learning of object detectors without any post-processing. In summary, this thesis provides analyses of recent pedestrian detectors and detection proposals, improves pedestrian detection by employing deep neural networks, and presents a viable alternative to traditional non-maximum suppression.}, }
Endnote
%0 Thesis %A Hosang, Jan %Y Schiele, Bernt %A referee: Ferrari, Vittorio %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations %T Analysis and Improvement of the Visual Object Detection Pipeline : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-8CC9-B %U urn:nbn:de:bsz:291-scidok-69080 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 205 p. %V phd %9 phd %X Visual object detection has seen substantial improvements during the last years due to the possibilities enabled by deep learning. While research on image classification provides continuous progress on how to learn image representations and classifiers jointly, object detection research focuses on identifying how to properly use deep learning technology to effectively localise objects. In this thesis, we analyse and improve different aspects of the commonly used detection pipeline. We analyse ten years of research on pedestrian detection and find that improvement of feature representations was the driving factor. Motivated by this finding, we adapt an end-to-end learned detector architecture from general object detection to pedestrian detection. Our deep network outperforms all previous neural networks for pedestrian detection by a large margin, even without using additional training data. After substantial improvements on pedestrian detection in recent years, we investigate the gap between human performance and state-of-the-art pedestrian detectors. We find that pedestrian detectors still have a long way to go before they reach human performance, and we diagnose failure modes of several top performing detectors, giving direction to future research. As a side-effect we publish new, better localised annotations for the Caltech pedestrian benchmark. 
We analyse detection proposals as a preprocessing step for object detectors. We establish different metrics and compare a wide range of methods according to these metrics. By examining the relationship between localisation of proposals and final object detection performance, we define and experimentally verify a metric that can be used as a proxy for detector performance. Furthermore, we address a structural weakness of virtually all object detection pipelines: non-maximum suppression. We analyse why it is necessary and what the shortcomings of the most common approach are. To address these problems, we present work to overcome these shortcomings and to replace typical non-maximum suppression with a learnable alternative. The introduced paradigm paves the way to true end-to-end learning of object detectors without any post-processing. In summary, this thesis provides analyses of recent pedestrian detectors and detection proposals, improves pedestrian detection by employing deep neural networks, and presents a viable alternative to traditional non-maximum suppression. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6908/
[7]
J. Kalojanov, “R-symmetry for Triangle Meshes: Detection and Applications,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
In this thesis, we investigate a certain type of local similarity between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have a completely different global structure. This allows for using r-microtiling for inverse modeling of shape variations, and we develop a method for shape decomposition into rigid, 3D-manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: we consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity, and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.
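A loose, purely illustrative version of "grouping points by their r-neighborhoods" can be sketched on a sampled point set. The thesis compares actual surface geometry under rigid motions; this hypothetical sketch only compares sorted distance multisets, which is a much weaker signature:

```python
from collections import defaultdict
from math import dist  # Python 3.8+

def r_neighborhood_classes(points, r, digits=6):
    """Group sample points by a crude signature of their r-neighborhood:
    the sorted multiset of distances to the other points within radius r.
    (A stand-in for the rigid-equivalence test of r-neighborhoods, which
    would compare the actual local geometry, not just distances.)"""
    classes = defaultdict(list)
    for p in points:
        sig = tuple(sorted(round(dist(p, q), digits)
                           for q in points if q != p and dist(p, q) <= r))
        classes[sig].append(p)
    return list(classes.values())
```

On four evenly spaced points on a line, for example, the two endpoints fall into one equivalence class and the two interior points into another, mirroring how microtiles capture locally identical regions.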
Export
BibTeX
@phdthesis{Kalojanovphd2017, TITLE = {R-symmetry for Triangle Meshes: Detection and Applications}, AUTHOR = {Kalojanov, Javor}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {In this thesis, we investigate a certain type of local similarities between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have completely different global structure. This allows for using r-microtiling for inverse modeling of shape variations and we develop a method for shape decomposition into rigid, 3D manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: We consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.}, }
Endnote
%0 Thesis %A Kalojanov, Javor %Y Slusallek, Philipp %A referee: Wand, Michael %A referee: Mitra, Niloy %+ International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations Computer Graphics, MPI for Informatics, Max Planck Society External Organizations %T R-symmetry for Triangle Meshes: Detection and Applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-96A3-B %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 94 p. %V phd %9 phd %X In this thesis, we investigate a certain type of local similarities between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, meaning that it contains identical r-neighborhoods, but can have completely different global structure. This allows for using r-microtiling for inverse modeling of shape variations and we develop a method for shape decomposition into rigid, 3D manufacturable building blocks that can be used to physically assemble shape collections. We obtain a small set of constructor pieces that are well suited for manufacturing and assembly by a novel method for tiling grammar simplification: We consider the connection between microtiles and non-context-free tiling grammars and optimize a graph-based representation, finding a good balance between expressiveness, simplicity and ease of assembly. By changing the objective function, we can re-purpose the grammar simplification method for mesh compression. The microtiles of a model encode its geometrically redundant parts, which can be used for creating shape representations with minimal memory footprints. 
Altogether, with this work we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6787/
[8]
E. Kuzey, “Populating Knowledge bases with Temporal Information,” Universität des Saarlandes, Saarbrücken, 2017.
Export
BibTeX
@phdthesis{KuzeyPhd2017, TITLE = {Populating Knowledge bases with Temporal Information}, AUTHOR = {Kuzey, Erdal}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, }
Endnote
%0 Thesis %A Kuzey, Erdal %Y Weikum, Gerhard %A referee: de Rijke, Maarten %A referee: Suchanek, Fabian %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Populating Knowledge bases with Temporal Information : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-EAE5-7 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P XIV, 143 p. %V phd %9 phd %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6811/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[9]
M. Lapin, “Image Classification with Limited Training Data and Class Ambiguity,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications due to scarcity of the data itself or the high costs associated with human annotation. Introducing additional information and structural constraints can help improve the performance of a learning algorithm. In this thesis, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high-dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity, where a clear distinction between the classes is no longer possible. Many real-world images are naturally multilabel, yet the existing annotation might only contain a single label. In this thesis, we propose and analyze a number of loss functions that allow for a certain tolerance in the top-k predictions of a learner. Our results indicate consistent improvements over the standard loss functions, which, compared to the proposed losses, put more penalty on the first incorrect prediction. All proposed learning methods are complemented with efficient optimization schemes that are based on stochastic dual coordinate ascent for convex problems and on gradient descent for nonconvex formulations.
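As a point of reference for the top-k setting, the quantity being tolerated is the standard top-k error. This sketch shows the evaluation metric only, not any of the thesis's proposed loss functions:

```python
def top_k_error(scores, labels, k):
    """Fraction of examples whose ground-truth label is NOT among
    the k highest-scoring classes. `scores` is one row of class
    scores per example; `labels` holds the true class indices."""
    errors = 0
    for row, y in zip(scores, labels):
        topk = sorted(range(len(row)), key=lambda c: row[c], reverse=True)[:k]
        errors += y not in topk
    return errors / len(labels)
```

Under class ambiguity, a prediction counts as correct whenever the true label appears anywhere in the top k, which is why losses tuned for top-k error can afford to penalize the first incorrect prediction less than standard losses do.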
Export
BibTeX
@phdthesis{Lapinphd17, TITLE = {Image Classification with Limited Training Data and Class Ambiguity}, AUTHOR = {Lapin, Maksim}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69098}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications due to scarcity of the data itself or high costs associated with human annotation. Introduction of additional information and structural constraints can help improve the performance of a learning algorithm. In this thesis, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity where clear distinction between the classes is no longer possible. Many real world images are naturally multilabel yet the existing annotation might only contain a single label. In this thesis, we propose and analyze a number of loss functions that allow for a certain tolerance in top k predictions of a learner. Our results indicate consistent improvements over the standard loss functions that put more penalty on the first incorrect prediction compared to the proposed losses. All proposed learning methods are complemented with efficient optimization schemes that are based on stochastic dual coordinate ascent for convex problems and on gradient descent for nonconvex formulations.}, }
Endnote
%0 Thesis %A Lapin, Maksim %Y Schiele, Bernt %A referee: Hein, Matthias %A referee: Lampert, Christoph %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Image Classification with Limited Training Data and Class Ambiguity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-9345-9 %U urn:nbn:de:bsz:291-scidok-69098 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 227 p. %V phd %9 phd %X Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications due to scarcity of the data itself or high costs associated with human annotation. Introduction of additional information and structural constraints can help improve the performance of a learning algorithm. In this thesis, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity where clear distinction between the classes is no longer possible. Many real world images are naturally multilabel yet the existing annotation might only contain a single label. In this thesis, we propose and analyze a number of loss functions that allow for a certain tolerance in top k predictions of a learner. Our results indicate consistent improvements over the standard loss functions that put more penalty on the first incorrect prediction compared to the proposed losses. 
All proposed learning methods are complemented with efficient optimization schemes that are based on stochastic dual coordinate ascent for convex problems and on gradient descent for nonconvex formulations. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6909/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[10]
M. Malinowski, “Towards Holistic Machines: From Visual Recognition To Question Answering About Real-world Image,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Computer Vision has undergone major changes over the last five years. Here, we investigate whether the performance of the deep architectures behind this progress generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and on the foundations of a Visual Turing Test, where scene understanding is tested by a series of questions about its content. In our studies, we propose DAQUAR, the first ‘question answering about real-world images’ dataset, together with two methods that address the problem: a symbolic-based and a neural-based visual question answering architecture. The symbolic-based method relies on a semantic parser, a database of visual facts, and a Bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, an image encoder, a multimodal embedding, and an answer decoder. This architecture has proven to be effective in capturing language-based biases, and it has become a standard component of other visual question answering architectures. Along with the methods, we also investigate various evaluation metrics that embrace uncertainty in word meaning and various interpretations of the scene and the question.
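None of the following comes from the thesis itself; it is only a toy sketch of the encoder / multimodal-embedding / decoder pipeline the abstract describes, with a bag-of-words stand-in for the question encoder and a linear answer scorer (all names, vectors, and weights hypothetical):

```python
def encode_question(tokens, vocab):
    """Toy bag-of-words question encoder (a stand-in for the recurrent
    question encoders typically used in neural VQA systems)."""
    vec = [0.0] * len(vocab)
    for t in tokens:
        if t in vocab:
            vec[vocab[t]] += 1.0
    return vec

def decode_answer(question_vec, image_vec, W):
    """Toy multimodal embedding + answer decoder: concatenate the two
    modality vectors and score each candidate answer with a linear
    layer W (one row of weights per answer); return the best index."""
    x = question_vec + image_vec  # list concatenation = naive fusion
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    return max(range(len(scores)), key=scores.__getitem__)
```

In a real system each stage would be a learned network and all parts would be trained jointly end to end; the sketch only makes the data flow between the four components concrete.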
Export
BibTeX
@phdthesis{Malinowskiphd17, TITLE = {Towards Holistic Machines: From Visual Recognition To Question Answering About Real-world Image}, AUTHOR = {Malinowski, Mateusz}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-68978}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Computer Vision has undergone major changes over the recent five years. Here, we investigate if the performance of such architectures generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and the foundations of a Visual Turing Test, where the scene understanding is tested by a series of questions about its content. In our studies, we propose DAQUAR, the first {\textquoteleft}question answering about real-world images{\textquoteright} dataset together with methods, termed a symbolic-based and a neural-based visual question answering architectures, that address the problem. The symbolic-based method relies on a semantic parser, a database of visual facts, and a bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, image encoder, multimodal embedding, and answer decoder. This architecture has proven to be effective in capturing language-based biases. It also becomes the standard component of other visual question answering architectures. Along with the methods, we also investigate various evaluation metrics that embraces uncertainty in word's meaning, and various interpretations of the scene and the question.}, }
Endnote
%0 Thesis %A Malinowski, Mateusz %Y Fritz, Mario %A referee: Pinkal, Manfred %A referee: Darrell, Trevor %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Towards Holistic Machines: From Visual Recognition To Question Answering About Real-world Image : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-9339-5 %U urn:nbn:de:bsz:291-scidok-68978 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 276 p. %V phd %9 phd %X Computer Vision has undergone major changes over the recent five years. Here, we investigate if the performance of such architectures generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and the foundations of a Visual Turing Test, where the scene understanding is tested by a series of questions about its content. In our studies, we propose DAQUAR, the first &#8216;question answering about real-world images&#8217; dataset together with methods, termed a symbolic-based and a neural-based visual question answering architectures, that address the problem. The symbolic-based method relies on a semantic parser, a database of visual facts, and a bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, image encoder, multimodal embedding, and answer decoder. This architecture has proven to be effective in capturing language-based biases. It also becomes the standard component of other visual question answering architectures. 
Along with the methods, we also investigate various evaluation metrics that embrace uncertainty in word meaning, and various interpretations of the scene and the question. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6897/
[11]
S. Mukherjee, “Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. Prior works in this domain operate on a static snapshot of the community, make strong assumptions about the structure of the data (e.g., relational tables), or consider only shallow features for text classification. To address the above limitations, we propose probabilistic graphical models that can leverage the joint interplay between multiple factors in online communities --- like user interactions, community dynamics, and textual content --- to automatically assess the credibility of user-contributed online content, and the expertise of users and its evolution, with user-interpretable explanations. To this end, we devise new models based on Conditional Random Fields for different settings, such as incorporating partial expert knowledge for semi-supervised learning, and handling discrete labels as well as numeric ratings for fine-grained analysis. This enables applications such as extracting reliable side effects of drugs from user-contributed posts in health forums, and identifying credible content in news communities. Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture these dynamics, we propose generative models based on Hidden Markov Models, Latent Dirichlet Allocation, and Brownian Motion to trace the continuous evolution of user expertise and their language model over time. This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. This also enables applications such as identifying helpful product reviews, and detecting fake and anomalous reviews with limited information.
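The generative models mentioned above are not spelled out in the abstract; as a rough illustration of the HMM building block, here is the standard forward algorithm over hypothetical "expertise" states (all state names and probabilities invented for the example):

```python
def forward(obs, start, trans, emit):
    """HMM forward algorithm: total probability of an observation
    sequence, marginalizing over all latent state paths (here the
    latent states stand for hypothetical user expertise levels)."""
    states = list(start)
    # initialize with the first observation
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    # inductively fold in each subsequent observation
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

# Toy example: a user drifts from 'novice' towards 'expert',
# emitting posts of 'low' or 'high' quality.
start = {'novice': 0.8, 'expert': 0.2}
trans = {'novice': {'novice': 0.7, 'expert': 0.3},
         'expert': {'novice': 0.1, 'expert': 0.9}}
emit = {'novice': {'low': 0.9, 'high': 0.1},
        'expert': {'low': 0.2, 'high': 0.8}}
likelihood = forward(['low', 'high'], start, trans, emit)
```

The thesis's models extend this kind of latent-state machinery with topic models and continuous-time components to trace expertise and language evolution jointly.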
Export
BibTeX
@phdthesis{Mukherjeephd17, TITLE = {Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities}, AUTHOR = {Mukherjee, Subhabrata}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69269}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. Prior works in this domain operate on a static snapshot of the community, make strong assumptions about the structure of the data (e.g., relational tables), or consider only shallow features for text classification. To address these limitations, we propose probabilistic graphical models that can leverage the joint interplay between multiple factors in online communities --- like user interactions, community dynamics, and textual content --- to automatically assess the credibility of user-contributed online content, as well as the expertise of users and its evolution, with user-interpretable explanations. To this end, we devise new models based on Conditional Random Fields for different settings, such as incorporating partial expert knowledge for semi-supervised learning, and handling discrete labels as well as numeric ratings for fine-grained analysis. This enables applications such as extracting reliable side-effects of drugs from user-contributed posts in health forums, and identifying credible content in news communities. Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture these dynamics, we propose generative models based on Hidden Markov Models, Latent Dirichlet Allocation, and Brownian Motion to trace the continuous evolution of user expertise and their language models over time. 
This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. This also enables applications such as identifying helpful product reviews, and detecting fake and anomalous reviews with limited information.}, }
Endnote
%0 Thesis %A Mukherjee, Subhabrata %Y Weikum, Gerhard %A referee: Han, Jiawei %A referee: G&#252;nnemann, Stephan %+ Databases and Information Systems, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-A648-0 %U urn:nbn:de:bsz:291-scidok-69269 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 166 p. %V phd %9 phd %X One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. Prior works in this domain operate on a static snapshot of the community, make strong assumptions about the structure of the data (e.g., relational tables), or consider only shallow features for text classification. To address these limitations, we propose probabilistic graphical models that can leverage the joint interplay between multiple factors in online communities --- like user interactions, community dynamics, and textual content --- to automatically assess the credibility of user-contributed online content, as well as the expertise of users and its evolution, with user-interpretable explanations. To this end, we devise new models based on Conditional Random Fields for different settings, such as incorporating partial expert knowledge for semi-supervised learning, and handling discrete labels as well as numeric ratings for fine-grained analysis. This enables applications such as extracting reliable side-effects of drugs from user-contributed posts in health forums, and identifying credible content in news communities. 
Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture these dynamics, we propose generative models based on Hidden Markov Models, Latent Dirichlet Allocation, and Brownian Motion to trace the continuous evolution of user expertise and their language models over time. This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. This also enables applications such as identifying helpful product reviews, and detecting fake and anomalous reviews with limited information. %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6926/
[12]
A. Rohrbach, “Generation and Grounding of Natural Language Descriptions for Visual Data,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for the linguistic concepts, necessary for many applications. This thesis focuses on both directions and tackles three specific problems. First, we develop recognition approaches to understand videos of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at a variable level of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach that learns from videos and sentences to describe movie clips, relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people. To summarize, this thesis advances the state of the art in automatic video description and visual grounding, and also contributes large datasets for studying the intersection of computer vision and computational linguistics.
Export
BibTeX
@phdthesis{Rohrbachphd17, TITLE = {Generation and Grounding of Natural Language Descriptions for Visual Data}, AUTHOR = {Rohrbach, Anna}, LANGUAGE = {eng}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for the linguistic concepts, necessary for many applications. This thesis focuses on both directions and tackles three specific problems. First, we develop recognition approaches to understand videos of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at a variable level of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach that learns from videos and sentences to describe movie clips, relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people. To summarize, this thesis advances the state of the art in automatic video description and visual grounding, and also contributes large datasets for studying the intersection of computer vision and computational linguistics.}, }
Endnote
%0 Thesis %A Rohrbach, Anna %Y Schiele, Bernt %A referee: Demberg, Vera %A referee: Darrell, Trevor %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Generation and Grounding of Natural Language Descriptions for Visual Data : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-57D4-E %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %8 02.06.2017 %P X, 215 p. %V phd %9 phd %X Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for the linguistic concepts, necessary for many applications. This thesis focuses on both directions and tackles three specific problems. First, we develop recognition approaches to understand videos of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at a variable level of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach that learns from videos and sentences to describe movie clips, relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people. 
To summarize, this thesis advances the state of the art in automatic video description and visual grounding, and also contributes large datasets for studying the intersection of computer vision and computational linguistics. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6874/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[13]
P. Sun, “Bi-(N-) cluster editing and its biomedical applications,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
The extremely fast advances in wet-lab techniques have led to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in today's systems biology. The traditional clustering approach, although widely used to divide data into groups sharing common features, is less powerful for the analysis of heterogeneous data from n different sources (n ≥ 2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large-scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing, which models the input as n-partite graphs and solves the clustering problem with various strategies. In the first part of the thesis, the complexity and fixed-parameter tractability of the extended bicluster editing model with relaxed constraints, namely the ?-bicluster editing model, are investigated, and its NP-hardness is proven. Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with evaluations of their performance and systematic comparisons against other algorithms of the same type for solving the bi-/n-cluster editing problem. To demonstrate the practical impact, three real-world analyses using n-CluE are performed, including (a) prediction of novel genotype-phenotype associations by clustering data from Genome-Wide Association Studies; (b) comparison between n-CluE and eight other biclustering tools on GEO Omnibus microarray data sets; (c) drug repositioning predictions by co-clustering on drug, gene, and disease networks. 
The outstanding performance of n-CluE in these real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biologically relevant information in bioinformatics analyses.
Export
BibTeX
@phdthesis{Sunphd17, TITLE = {Bi-(N-) cluster editing and its biomedical applications}, AUTHOR = {Sun, Peng}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69309}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {The extremely fast advances in wet-lab techniques have led to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in today's systems biology. The traditional clustering approach, although widely used to divide data into groups sharing common features, is less powerful for the analysis of heterogeneous data from n different sources (n $\geq$ 2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large-scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing, which models the input as n-partite graphs and solves the clustering problem with various strategies. In the first part of the thesis, the complexity and fixed-parameter tractability of the extended bicluster editing model with relaxed constraints, namely the ?-bicluster editing model, are investigated, and its NP-hardness is proven. Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with evaluations of their performance and systematic comparisons against other algorithms of the same type for solving the bi-/n-cluster editing problem. 
To demonstrate the practical impact, three real-world analyses using n-CluE are performed, including (a) prediction of novel genotype-phenotype associations by clustering data from Genome-Wide Association Studies; (b) comparison between n-CluE and eight other biclustering tools on GEO Omnibus microarray data sets; (c) drug repositioning predictions by co-clustering on drug, gene, and disease networks. The outstanding performance of n-CluE in these real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biologically relevant information in bioinformatics analyses.}, }
Endnote
%0 Thesis %A Sun, Peng %Y Baumbach, Jan %A referee: Guo, Jiong %A referee: Lengauer, Thomas %+ Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society External Organizations Computational Biology and Applied Algorithmics, MPI for Informatics, Max Planck Society %T Bi-(N-) cluster editing and its biomedical applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-A65E-F %U urn:nbn:de:bsz:291-scidok-69309 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 192 p. %V phd %9 phd %X The extremely fast advances in wet-lab techniques have led to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in today's systems biology. The traditional clustering approach, although widely used to divide data into groups sharing common features, is less powerful for the analysis of heterogeneous data from n different sources (n ≥ 2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large-scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing, which models the input as n-partite graphs and solves the clustering problem with various strategies. In the first part of the thesis, the complexity and fixed-parameter tractability of the extended bicluster editing model with relaxed constraints, namely the ?-bicluster editing model, are investigated, and its NP-hardness is proven. 
Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with evaluations of their performance and systematic comparisons against other algorithms of the same type for solving the bi-/n-cluster editing problem. To demonstrate the practical impact, three real-world analyses using n-CluE are performed, including (a) prediction of novel genotype-phenotype associations by clustering data from Genome-Wide Association Studies; (b) comparison between n-CluE and eight other biclustering tools on GEO Omnibus microarray data sets; (c) drug repositioning predictions by co-clustering on drug, gene, and disease networks. The outstanding performance of n-CluE in these real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biologically relevant information in bioinformatics analyses. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6930/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[14]
M. Weigel, “Interactive On-Skin Devices for Expressive Touch-based Interactions,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Skin has been proposed as a large, always-available, and easy-to-access input surface for mobile computing. However, it is fundamentally different from prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques. We present the findings from an elicitation study on how and where people interact on their skin. The findings show that participants use various body locations for on-skin interaction. Moreover, they show that skin allows for expressive interaction using multi-touch input and skin-specific modalities. We contribute three skin-worn device classes and their interaction techniques to enable expressive on-skin interactions: iSkin investigates multi-touch and pressure input on various body locations. SkinMarks supports touch, squeeze, and bend sensing with co-located visual output. The devices' conformality to skin enables interaction on highly challenging body locations. Finally, ExpressSkin investigates expressive interaction techniques using fluid combinations of high-resolution pressure, shear, and squeeze input. Taken together, this thesis contributes towards expressive on-skin interaction with multi-touch and skin-specific input modalities on various body locations.
Export
BibTeX
@phdthesis{Weigelphd17, TITLE = {Interactive On-Skin Devices for Expressive Touch-based Interactions}, AUTHOR = {Weigel, Martin}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-68857}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Skin has been proposed as a large, always-available, and easy-to-access input surface for mobile computing. However, it is fundamentally different from prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques. We present the findings from an elicitation study on how and where people interact on their skin. The findings show that participants use various body locations for on-skin interaction. Moreover, they show that skin allows for expressive interaction using multi-touch input and skin-specific modalities. We contribute three skin-worn device classes and their interaction techniques to enable expressive on-skin interactions: iSkin investigates multi-touch and pressure input on various body locations. SkinMarks supports touch, squeeze, and bend sensing with co-located visual output. The devices' conformality to skin enables interaction on highly challenging body locations. Finally, ExpressSkin investigates expressive interaction techniques using fluid combinations of high-resolution pressure, shear, and squeeze input. Taken together, this thesis contributes towards expressive on-skin interaction with multi-touch and skin-specific input modalities on various body locations.}, }
Endnote
%0 Thesis %A Weigel, Martin %Y Steimle, J&#252;rgen %A referee: Olwal, Alex %A referee: Kr&#252;ger, Antonio %+ Computer Graphics, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Interactive On-Skin Devices for Expressive Touch-based Interactions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-904F-D %U urn:nbn:de:bsz:291-scidok-68857 %I Universit&#228;t des Saarlandes %C Saarbr&#252;cken %D 2017 %P 153 p. %V phd %9 phd %X Skin has been proposed as a large, always-available, and easy-to-access input surface for mobile computing. However, it is fundamentally different from prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques. We