Algorithms & Complexity

Technical Reports

2016
[1]
E. Althaus, B. Beber, W. Damm, S. Disch, W. Hagemann, A. Rakow, C. Scholl, U. Waldmann, and B. Wirtz, “Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization,” SFB/TR 14 AVACS, ATR103, 2016.
Abstract
This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as those naturally arising when incorporating health state monitoring and degradation levels into the controller design. Such models cannot -- in contrast to purely functional controller models -- be analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and the discrete part of the state space. The optimization techniques shown consistently yield a speedup of about 20 over previously published results for a similar benchmark suite, and complement these with new results on counterexample-guided abstraction refinement. In combination with methods guaranteeing the preciseness of abstractions, this allows us to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and $2^{71}$ discrete states, 20 continuous variables and $2^{199}$ discrete states, and 9 continuous variables and $2^{271}$ discrete states.
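For readers unfamiliar with the setting, the safety check at the heart of such engines is a reachability fixpoint. The Python sketch below shows only that skeleton on a toy explicit-state system; the report's point is precisely that the state sets must instead be represented symbolically (e.g., BDDs for the discrete part, polyhedra for the continuous part), and the toy `post` relation is invented for illustration.

```python
# Minimal sketch of the reachability fixpoint underlying safety checking.
# State sets are plain Python sets over a toy discrete system, purely to
# show the iteration; real engines represent these sets symbolically.

def reachable(init, post):
    """Least fixpoint: all states reachable from `init` under `post`."""
    reach, frontier = set(init), set(init)
    while frontier:
        new = {t for s in frontier for t in post(s)} - reach
        reach |= new
        frontier = new
    return reach

def is_safe(init, post, bad):
    """Safety holds iff no bad state is reachable."""
    return reachable(init, post).isdisjoint(bad)

# Toy system: states 0..5, a transition s -> s+1 fires only for even s.
post = lambda s: {s + 1} if s % 2 == 0 and s < 5 else set()
assert is_safe({0}, post, bad={4})       # only 0 -> 1 fires, so 4 is unreachable
assert not is_safe({0}, post, bad={1})   # 1 is reachable
```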
2013
[2]
C.-C. Huang and S. Ott, “New Results for Non-preemptive Speed Scaling,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2013-1-001, 2013.
Abstract
We consider the speed scaling problem introduced in the seminal paper of Yao et al. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, must be executed on a speed-scalable processor. The power consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to process all jobs while minimizing the energy consumption. The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and allows a constant-factor approximation. To date, the (general) complexity of this problem is unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$. The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\alpha$-approximation was known. In addition, we show that two other special cases of this problem allow fully polynomial-time approximation schemes (FPTASs).
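As a concrete illustration of the energy model $P(s) = s^\alpha$: for a single job, convexity of $s^\alpha$ makes one constant speed optimal within the job's window. A minimal sketch (the numbers are made up; this is not the paper's algorithm):

```python
# Energy of running work at constant speed s = volume / duration under the
# power model P(s) = s^alpha, alpha > 1. Convexity implies that any uneven
# split of the same work over the same window costs strictly more energy.

def energy(volume, duration, alpha=3.0):
    """Energy = P(s) * duration at constant speed s = volume / duration."""
    s = volume / duration
    return (s ** alpha) * duration

V, T = 10.0, 5.0
uniform = energy(V, T)                                  # one constant speed
uneven = energy(0.7 * V, 0.5 * T) + energy(0.3 * V, 0.5 * T)
assert uniform < uneven
print(f"uniform: {uniform:.1f}, uneven: {uneven:.1f}")  # 40.0 vs 59.2
```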
[3]
F. Makari, B. Awerbuch, R. Gemulla, R. Khandekar, J. Mestre, and M. Sozio, “A Distributed Algorithm for Large-scale Generalized Matching,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2013-5-002, 2013.
Abstract
Generalized matching problems arise in a number of applications, including computational advertising, recommender systems, and trade markets. Consider, for example, the problem of recommending multimedia items (e.g., DVDs) to users such that (1) users are recommended items that they are likely to be interested in, (2) every user gets neither too few nor too many recommendations, and (3) only items available in stock are recommended to users. State-of-the-art matching algorithms fail to cope with large real-world instances, which may involve millions of users and items. We propose the first distributed algorithm for computing near-optimal solutions to large-scale generalized matching problems like the one above. Our algorithm is designed to run on a small cluster of commodity nodes (or in a MapReduce environment), has strong approximation guarantees, and requires only a poly-logarithmic number of passes over the input. In particular, we propose a novel distributed algorithm to approximately solve mixed packing-covering linear programs, which include but are not limited to generalized matching problems. Experiments on real-world and synthetic data suggest that our algorithm scales to very large problem sizes and can be orders of magnitude faster than alternative approaches.
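To make the constraint structure of the recommendation example concrete, here is a small sequential greedy sketch respecting per-user caps and item stock. It is only an illustration with invented names and carries none of the guarantees; the report's contribution is a distributed LP-based algorithm, not reproduced here.

```python
# Greedy illustration of constraints (1)-(3): recommend high-score items,
# but give each user at most user_cap[u] items and never exceed item stock.

def greedy_recommend(scores, user_cap, stock):
    """scores: dict (user, item) -> interest score; higher is better."""
    taken = {u: 0 for u in user_cap}
    left = dict(stock)
    matching = []
    for (u, i), _ in sorted(scores.items(), key=lambda kv: -kv[1]):
        if taken[u] < user_cap[u] and left[i] > 0:
            matching.append((u, i))
            taken[u] += 1
            left[i] -= 1
    return matching

scores = {("ann", "dvd1"): 0.9, ("ann", "dvd2"): 0.8, ("bob", "dvd1"): 0.7}
print(greedy_recommend(scores, user_cap={"ann": 1, "bob": 1},
                       stock={"dvd1": 1, "dvd2": 1}))
# [('ann', 'dvd1')] -- bob gets nothing: dvd1 is out of stock for him
```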
2010
[4]
E. Althaus, S. Altmeyer, and R. Naujoks, “A New Combinatorial Approach to Parametric Path Analysis,” SFB/TR 14 AVACS, ATR58, 2010.
Abstract
Hard real-time systems require tasks to finish on time. To guarantee the timeliness of such a system, static timing analyses derive upper bounds on the worst-case execution time of tasks. There are two types of timing analyses: numeric and parametric ones. A numeric analysis derives a numeric timing bound and, to this end, assumes all information such as loop bounds to be given a priori. If these bounds are unknown at analysis time, a parametric analysis can compute a timing formula parametric in these variables. A performance bottleneck of timing analyses, numeric and especially parametric ones, can be the so-called path analysis, which determines the path in the analyzed task with the longest execution time bound. In this paper, we present a new approach to path analysis. This approach exploits the rather regular structure of software for hard real-time and safety-critical systems. As our evaluation shows, we strongly improve upon former techniques in terms of precision and runtime in the parametric case. Even in the numeric case, our approach matches state-of-the-art techniques and may be an alternative to the commercial tools employed for path analysis.
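In its simplest, loop-free form, path analysis is a longest-path computation over a DAG in topological order; a textbook sketch follows (the report's combinatorial approach additionally handles loop bounds and symbolic parameters, which this does not).

```python
# Longest execution-time-bound path through a loop-free control-flow graph,
# i.e. a longest path in a DAG, computed in linear time over a topological
# order of the nodes.

from graphlib import TopologicalSorter

def longest_path(edges, source, cost):
    """edges: node -> list of successors; cost: node -> execution-time bound."""
    preds = {v: set() for v in edges}
    for u, succs in edges.items():
        for v in succs:
            preds[v].add(u)
    best = {v: float("-inf") for v in edges}
    best[source] = cost[source]
    for v in TopologicalSorter(preds).static_order():  # predecessors come first
        for w in edges[v]:
            best[w] = max(best[w], best[v] + cost[w])
    return best

edges = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(longest_path(edges, "a", {"a": 1, "b": 5, "c": 2, "d": 1})["d"])  # 7
```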
[5]
E. Berberich, M. Hemmer, and M. Kerber, “A Generic Algebraic Kernel for Non-linear Geometric Applications,” INRIA, Sophia Antipolis, France, 7274, 2010.
[6]
C.-C. Huang and T. Kavitha, “Maximum Cardinality Popular Matchings in Strict Two-sided Preference Lists,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2010-1-001, 2010.
Abstract
We consider the problem of computing a maximum cardinality popular matching in a bipartite graph $G = (\mathcal{A} \cup \mathcal{B}, E)$ where each vertex $u \in \mathcal{A} \cup \mathcal{B}$ ranks its neighbors in a strict order of preference. This is the same as an instance of the stable marriage problem with incomplete lists. A matching $M^*$ is said to be popular if there is no matching $M$ such that more vertices are better off in $M$ than in $M^*$. Popular matchings have been extensively studied in the case of one-sided preference lists, i.e., only vertices of $\mathcal{A}$ have preferences over their neighbors while vertices in $\mathcal{B}$ have no preferences; polynomial-time algorithms are known here to determine whether a given instance admits a popular matching and, if so, to compute one of maximum cardinality. It has very recently been shown that for two-sided preference lists, the problem of determining whether a given instance admits a popular matching is NP-complete. However, this hardness result assumes that preference lists have ties. When preference lists are strict, it is easy to show that popular matchings always exist, since stable matchings always exist and they are popular. But the complexity of computing a maximum cardinality popular matching was unknown. In this paper we give an $O(mn)$ algorithm for this problem, where $n = |\mathcal{A}| + |\mathcal{B}|$ and $m = |E|$.
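Since, with strict lists, any stable matching is popular, the classic Gale-Shapley algorithm already produces a popular matching, just not necessarily one of maximum cardinality. A standard deferred-acceptance sketch (not the paper's $O(mn)$ algorithm):

```python
# Textbook Gale-Shapley deferred acceptance for strict, possibly incomplete
# preference lists; the resulting stable matching is popular.

def gale_shapley(a_prefs, b_prefs):
    """a_prefs/b_prefs: vertex -> strict preference list over neighbors."""
    rank = {b: {a: r for r, a in enumerate(p)} for b, p in b_prefs.items()}
    nxt = {a: 0 for a in a_prefs}      # next neighbor each a will propose to
    partner = {}                       # current partner of each b
    free = list(a_prefs)
    while free:
        a = free.pop()
        if nxt[a] >= len(a_prefs[a]):
            continue                   # a exhausted its list; stays unmatched
        b = a_prefs[a][nxt[a]]
        nxt[a] += 1
        if a not in rank[b]:
            free.append(a)             # b finds a unacceptable
        elif b not in partner:
            partner[b] = a
        elif rank[b][a] < rank[b][partner[b]]:
            free.append(partner[b])    # b trades up; old partner is free again
            partner[b] = a
        else:
            free.append(a)             # rejected; a proposes further down
    return {a: b for b, a in partner.items()}

print(gale_shapley({"x": ["u", "v"], "y": ["u"]},
                   {"u": ["y", "x"], "v": ["x"]}))  # {'y': 'u', 'x': 'v'}
```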
[7]
S. Seufert, S. Bedathur, J. Mestre, and G. Weikum, “Bonsai: Growing Interesting Small Trees,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2010-5-005, 2010.
Abstract
Graphs are increasingly used to model a variety of loosely structured data such as biological or social networks and entity-relationships. Given this profusion of large-scale graph data, efficiently discovering interesting substructures buried within is essential. These substructures are typically used in determining subsequent actions, such as conducting visual analytics by humans or designing expensive biomedical experiments. In such settings, it is often desirable to constrain the size of the discovered results in order to directly control the associated costs. In this report, we address the problem of finding cardinality-constrained connected subtrees from large node-weighted graphs that maximize the sum of weights of selected nodes. We provide an efficient constant-factor approximation algorithm for this strongly NP-hard problem. Our techniques can be applied in a wide variety of application settings, for example in differential analysis of graphs, a problem that frequently arises in bioinformatics but also has applications on the web.
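On trees, the cardinality-constrained maximum-weight connected subtree admits an exact knapsack-style dynamic program; a sketch follows. The report's setting is general graphs, where the problem is strongly NP-hard and is handled with a constant-factor approximation instead.

```python
# Tree DP: dp(v)[j] = best weight of a connected subtree that contains v and
# uses j nodes from v's subtree; children are merged knapsack-style.

def best_subtree(tree, weight, root, k):
    def dp(v, parent):
        best = [float("-inf")] * (k + 1)
        best[1] = weight[v]                          # the subtree {v} itself
        for c in tree[v]:
            if c == parent:
                continue
            child = dp(c, v)
            merged = best[:]
            for used in range(1, k + 1):             # nodes already used at v
                for extra in range(1, k - used + 1): # nodes taken from child c
                    merged[used + extra] = max(merged[used + extra],
                                               best[used] + child[extra])
            best = merged
        return best
    return max(dp(root, None)[1:k + 1])

tree = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
weight = {0: 1, 1: 5, 2: -2, 3: 4}
print(best_subtree(tree, weight, root=0, k=3))  # 6: subtree {0, 1}
# (Subtrees must contain `root`; maximize over all roots for the general case.)
```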
2008
[8]
D. Ajwani, I. Malinger, U. Meyer, and S. Toledo, “Characterizing the performance of Flash memory storage devices and its impact on algorithm design,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2008-1-001, 2008.
Abstract
Initially used in digital audio players, digital cameras, mobile phones, and USB memory sticks, flash memory may become the dominant form of end-user storage in mobile computing, either completely replacing magnetic hard disks or complementing them as additional secondary storage. We study the design of algorithms and data structures that can better exploit flash memory devices. To this end, we characterize the performance of NAND-flash-based storage devices, including many solid state disks. We show that these devices have better random read performance than hard disks, but much worse random write performance. We also analyze the effects of misalignment, aging, past I/O patterns, etc. on the performance obtained on these devices. We show that despite the similarities between flash memory and RAM (fast random reads) and between flash disk and hard disk (both are block-based devices), algorithms designed in the RAM model or the external memory model do not realize the full potential of flash memory devices. Finally, we give some broad guidelines for designing algorithms which can exploit the comparative advantages of both a flash memory device and a hard disk, when used together.
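The flavor of such a microbenchmark, sketched in Python for POSIX systems: time random block-aligned reads versus writes on a target file. Real measurements require far more care (bypassing the OS page cache, alignment, device aging and past-I/O state, as the report discusses); the file path below is a placeholder.

```python
# Rough random-I/O microbenchmark sketch (POSIX only: os.pread / os.pwrite).
# The target file must already exist and be at least a few blocks long.

import os, random, time

def random_io(path, block=4096, ops=1000, write=False):
    fd = os.open(path, os.O_RDWR if write else os.O_RDONLY)
    size = os.fstat(fd).st_size
    buf = os.urandom(block)
    t0 = time.perf_counter()
    for _ in range(ops):
        off = random.randrange(0, size // block) * block  # block-aligned
        if write:
            os.pwrite(fd, buf, off)
        else:
            os.pread(fd, block, off)
    os.close(fd)
    return (time.perf_counter() - t0) / ops               # seconds per op

# Placeholder usage; flash devices typically show reads << writes here:
# print(random_io("/tmp/testfile"), random_io("/tmp/testfile", write=True))
```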
[9]
E. Berberich, M. Hemmer, M. Karavelas, S. Pion, M. Teillaud, and E. Tsigaridas, “Prototype Implementation of the Algebraic Kernel,” University of Groningen, Groningen, ACS-TR-121202-01, 2008.
Abstract
In this report we describe the current progress with respect to prototype implementations of algebraic kernels within the ACS project. More specifically, we report on: (1) the Cgal package Algebraic_kernel_for_circles_2_2, aimed at providing the necessary algebraic functionality required for treating circular arcs; (2) an interface between Cgal and SYNAPS for accessing the algebraic functionality in the SYNAPS library; (3) the NumeriX library (part of the EXACUS project), a prototype implementation of a set of algebraic tools on univariate polynomials needed to build an algebraic kernel; and (4) a rough CGAL-like prototype implementation of a set of algebraic tools on univariate polynomials.
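The core service of such an algebraic kernel is exact computation with univariate polynomials, e.g., isolating real roots. A minimal sketch with exact rational arithmetic, using plain sign-change bisection (real kernels certify root counts with Descartes' rule or Sturm sequences; this sketch misses roots of even multiplicity and exact endpoint roots):

```python
# Exact sign-change bisection over rationals: returns disjoint intervals of
# width < eps, each containing a sign-change root of the polynomial.

from fractions import Fraction

def horner(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) exactly; coeffs[0] is the constant term."""
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def isolate(coeffs, lo, hi, eps=Fraction(1, 1024)):
    if horner(coeffs, lo) * horner(coeffs, hi) >= 0:
        return []            # no sign change certified (endpoint roots ignored)
    if hi - lo < eps:
        return [(lo, hi)]
    mid = (lo + hi) / 2
    return isolate(coeffs, lo, mid, eps) + isolate(coeffs, mid, hi, eps)

# x^2 - 2 on [0, 2]: one interval of width < 1/1024 around sqrt(2)
print(isolate([Fraction(-2), Fraction(0), Fraction(1)], Fraction(0), Fraction(2)))
```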
2007
[10]
E. Althaus and S. Canzar, “A Lagrangian relaxation approach for the multiple sequence alignment problem,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2007-1-002, 2007.
Abstract
We present a branch-and-bound (B&B) algorithm for the multiple sequence alignment problem (MSA), one of the most important problems in computational biology. The upper bound at each B&B node is based on a Lagrangian relaxation of an integer linear programming formulation for MSA. Dualizing certain inequalities, the Lagrangian subproblem becomes a pairwise alignment problem, which can be solved efficiently by a dynamic programming approach. By reformulating the problem with respect to additionally introduced variables prior to relaxation, we improve the convergence rate dramatically while still being able to solve the Lagrangian problem efficiently. Our experiments show that our implementation, although preliminary, outperforms all exact algorithms for the multiple sequence alignment problem.
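The generic engine behind such bounds is the subgradient method: dualize a complicating constraint with a multiplier $\lambda \ge 0$, solve the now-easy inner problem, and take a projected subgradient step. A toy covering instance (not the MSA formulation):

```python
# Minimize c.x over x in {0,1}^n subject to sum(x) >= 1, dualizing the
# covering constraint: L(lam) = lam + sum(min(0, c_i - lam)) is a lower
# bound on the optimum for every lam >= 0; a subgradient loop maximizes it.

def lagrangian_bound(c, iters=100, step=1.0):
    lam, best = 0.0, float("-inf")
    for t in range(1, iters + 1):
        x = [1 if ci - lam < 0 else 0 for ci in c]    # inner problem, exact
        value = lam + sum(min(0.0, ci - lam) for ci in c)
        best = max(best, value)                       # best lower bound so far
        g = 1 - sum(x)                                # subgradient of L at lam
        lam = max(0.0, lam + (step / t) * g)          # projected 1/t step
    return best

print(lagrangian_bound([3.0, 5.0]))  # 3.0, the cost of the cheapest single item
```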
[11]
E. Berberich and M. Meyerovitch, “Computing Envelopes of Quadrics,” University of Groningen, Groningen, The Netherlands, ACS-TR-241402-03, 2007.
[12]
E. Berberich and L. Kettner, “Linear-Time Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2007-1-001, 2007.
[13]
E. Berberich, M. Hemmer, M. I. Karavelas, and M. Teillaud, “Revision of interface specification of algebraic kernel,” University of Groningen, Groningen, The Netherlands, ACS-TR-243301-01, 2007.
[14]
E. Berberich, E. Fogel, D. Halperin, K. Mehlhorn, and R. Wein, “Sweeping and maintaining two-dimensional arrangements on quadrics,” University of Groningen, Groningen, The Netherlands, ACS-TR-241402-02, 2007.
[15]
E. Berberich and M. Hemmer, “Definition of the 3D Quadrical Kernel Content,” University of Groningen, Groningen, The Netherlands, ACS-TR-243302-02, 2007.
[16]
E. Berberich, M. Caroli, and N. Wolpert, “Exact Computation of Arrangements of Rotated Conics,” University of Groningen, Groningen, The Netherlands, ACS-TR-123104-03, 2007.
[17]
E. Berberich, E. Fogel, and A. Meyer, “Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves,” University of Groningen, Groningen, The Netherlands, ACS-TR-243305-01, 2007.
[18]
A. Eigenwillig, L. Kettner, and N. Wolpert, “Snap Rounding of Bézier Curves,” Max-Planck-Institut für Informatik, Saarbrücken, Germany, MPI-I-2006-1-005, 2007.
[19]
A. Gidenstam and M. Papatriantafilou, “LFthreads: a lock-free thread library,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2007-1-003, 2007.
Abstract
This paper presents the synchronization in LFthreads, a thread library entirely based on lock-free methods, i.e., no spin-locks or similar synchronization mechanisms are employed in the implementation of the multithreading. Since lock-freedom is highly desirable in multiprocessors/multicores due to its advantages in parallelism, fault-tolerance, convoy-avoidance and more, there is an increased demand for lock-free methods in parallel applications, and hence also in multiprocessor/multicore system services. This is why a lock-free multithreading library is important. To the best of our knowledge, LFthreads is the first thread library that provides a lock-free implementation of blocking synchronization primitives for application threads. A lock-free implementation of objects with blocking semantics may sound like a contradictory goal. However, such objects have benefits: e.g., library operations that block and unblock threads on the same synchronization object can make progress in parallel, while maintaining the desired thread-level semantics and without having to wait for any "slow" operations among them. Besides, as no spin-locks or similar synchronization mechanisms are employed, processors are always able to do useful work. As a consequence, applications, too, can enjoy enhanced parallelism and fault-tolerance. The synchronization in LFthreads is achieved by a new method, which we call responsibility hand-off (RHO), that does not need any special kernel support.
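Lock-free algorithms rest on an atomic compare-and-swap (CAS). CPython exposes no hardware CAS, so the sketch below models it with a tiny internal lock purely to keep the simulation atomic; the push/pop retry loops on top are the standard lock-free idiom (a Treiber stack). The paper's responsibility hand-off (RHO) mechanism is its own contribution and is not reproduced here.

```python
# Treiber-stack sketch over a *modeled* CAS. In real lock-free code the CAS
# is a hardware instruction; the internal lock here only emulates that
# atomicity so the simulation is correct under Python threading.

import threading

class Ref:
    """A mutable cell with a modeled atomic compare-and-swap."""
    def __init__(self, value=None):
        self.value, self._guard = value, threading.Lock()
    def cas(self, expected, new):
        with self._guard:            # stands in for the hardware primitive
            if self.value is expected:
                self.value = new
                return True
            return False

class TreiberStack:
    def __init__(self):
        self.top = Ref(None)
    def push(self, item):
        while True:                  # retry loop: no thread blocks another
            old = self.top.value
            if self.top.cas(old, (item, old)):
                return
    def pop(self):
        while True:
            old = self.top.value
            if old is None:
                return None
            if self.top.cas(old, old[1]):
                return old[0]

s = TreiberStack()
s.push(1); s.push(2)
print(s.pop(), s.pop())  # 2 1
```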
2006
[20]
H. Bast, I. Weber, and C. W. Mortensen, “Output-sensitive autocompletion search,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2006-1-007, 2006.
Abstract
We consider the following autocompletion search scenario: imagine a user of a search engine typing a query; then with every keystroke display those completions of the last query word that would lead to the best hits, and also display the best such hits. The following problem is at the core of this feature: for a fixed document collection, given a set $D$ of documents, and an alphabetical range $W$ of words, compute the set of all word-in-document pairs $(w,d)$ from the collection such that $w \in W$ and $d\in D$. We present a new data structure with the help of which such autocompletion queries can be processed, on the average, in time linear in the input plus output size, independent of the size of the underlying document collection. At the same time, our data structure uses no more space than an inverted index. Actual query processing times on a large test collection correlate almost perfectly with our theoretical bound.
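For contrast with the output-sensitive bound, here is a minimal C++ baseline that answers the core query from an ordinary inverted index; all types and names are illustrative, not the report's data structure. Its cost grows with the total number of postings of words in $W$, which is precisely the dependence the report removes.

    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // word -> sorted document ids (an ordinary inverted index)
    using Postings = std::map<std::string, std::vector<int>>;

    // Report all (w, d) with w in the alphabetical range [lo, hi] and d in D.
    std::vector<std::pair<std::string, int>>
    rangeQuery(const Postings& index, const std::set<int>& D,
               const std::string& lo, const std::string& hi) {
        std::vector<std::pair<std::string, int>> result;
        for (auto it = index.lower_bound(lo);
             it != index.end() && it->first <= hi; ++it)
            for (int d : it->second)
                if (D.count(d)) result.emplace_back(it->first, d);
        return result;
    }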
Export
BibTeX
@techreport{, TITLE = {Output-sensitive autocompletion search}, AUTHOR = {Bast, Holger and Weber, Ingmar and Mortensen, Christian Worm}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-007}, NUMBER = {MPI-I-2006-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We consider the following autocompletion search scenario: imagine a user of a search engine typing a query; then with every keystroke display those completions of the last query word that would lead to the best hits, and also display the best such hits. The following problem is at the core of this feature: for a fixed document collection, given a set $D$ of documents, and an alphabetical range $W$ of words, compute the set of all word-in-document pairs $(w,d)$ from the collection such that $w \in W$ and $d\in D$. We present a new data structure with the help of which such autocompletion queries can be processed, on the average, in time linear in the input plus output size, independent of the size of the underlying document collection. At the same time, our data structure uses no more space than an inverted index. Actual query processing times on a large test collection correlate almost perfectly with our theoretical bound.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
Endnote
%0 Report %A Bast, Holger %A Weber, Ingmar %A Mortensen, Christian Worm %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Output-sensitive autocompletion search : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-681A-D %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-007 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 17 p. %X We consider the following autocompletion search scenario: imagine a user of a search engine typing a query; then with every keystroke display those completions of the last query word that would lead to the best hits, and also display the best such hits. The following problem is at the core of this feature: for a fixed document collection, given a set $D$ of documents, and an alphabetical range $W$ of words, compute the set of all word-in-document pairs $(w,d)$ from the collection such that $w \in W$ and $d\in D$. We present a new data structure with the help of which such autocompletion queries can be processed, on the average, in time linear in the input plus output size, independent of the size of the underlying document collection. At the same time, our data structure uses no more space than an inverted index. Actual query processing times on a large test collection correlate almost perfectly with our theoretical bound. %B Research Report / Max-Planck-Institut für Informatik
[21]
H. Bast, D. Majumdar, R. Schenkel, C. Theobalt, and G. Weikum, “IO-Top-k: index-access optimized top-k query processing,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2006-5-002, 2006.
Abstract
Top-k query processing is an important building block for ranked retrieval, with applications ranging from text and data integration to distributed aggregation of network logs and sensor data. Top-k queries operate on index lists for a query's elementary conditions and aggregate scores for result candidates. One of the best implementation methods in this setting is the family of threshold algorithms, which aim to terminate the index scans as early as possible based on lower and upper bounds for the final scores of result candidates. This procedure performs sequential disk accesses for sorted index scans, but also has the option of performing random accesses to resolve score uncertainty. This entails scheduling for the two kinds of accesses: 1) the prioritization of different index lists in the sequential accesses, and 2) the decision on when to perform random accesses and for which candidates. The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation. The current paper takes an integrated view of the scheduling issues and develops novel strategies that outperform prior proposals by a large margin. Our main contributions are new, principled scheduling methods based on a Knapsack-related optimization for sequential accesses and a cost model for random accesses. The methods can be further boosted by harnessing probabilistic estimators for scores, selectivities, and index list correlations. We also discuss efficient implementation techniques for the underlying data structures. In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB), our methods achieved significant performance gains compared to the best previously known methods: a factor of up to 3 in terms of execution costs, and a factor of 5 in terms of absolute run-times of our implementation. Our best techniques are close to a lower bound for the execution cost of the considered class of threshold algorithms.
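As background, here is a compact C++ sketch of the classic Fagin-style threshold algorithm (TA) with sorted and random accesses, the unscheduled baseline behind the family discussed above; the in-memory list representation and names are illustrative.

    #include <algorithm>
    #include <map>
    #include <set>
    #include <vector>

    // One index list: sorted access (score descending) plus random access.
    struct SortedList {
        std::vector<std::pair<int, double>> byScore;  // (doc, score), desc
        std::map<int, double> byDoc;                  // random access
    };

    // Round-robin sorted accesses; each newly seen doc is resolved fully by
    // random accesses. Stop when the k-th best full score reaches the
    // frontier threshold (the best total any unseen doc could still have).
    std::vector<std::pair<int, double>>
    topK(const std::vector<SortedList>& lists, size_t k) {
        std::map<int, double> full;  // doc -> fully aggregated score
        std::set<int> seen;
        for (size_t depth = 0;; ++depth) {
            double threshold = 0.0;
            bool any = false;
            for (const auto& L : lists) {
                if (depth >= L.byScore.size()) continue;
                any = true;
                auto [doc, s] = L.byScore[depth];
                threshold += s;
                if (seen.insert(doc).second) {  // first sighting: resolve fully
                    double total = 0.0;
                    for (const auto& M : lists) {
                        auto it = M.byDoc.find(doc);
                        if (it != M.byDoc.end()) total += it->second;
                    }
                    full[doc] = total;
                }
            }
            std::vector<std::pair<int, double>> best(full.begin(), full.end());
            std::sort(best.begin(), best.end(),
                      [](auto& a, auto& b) { return a.second > b.second; });
            if (best.size() > k) best.resize(k);
            if (!any || (best.size() == k && best.back().second >= threshold))
                return best;
        }
    }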
Export
BibTeX
@techreport{BastMajumdarSchenkelTheobaldWeikum2006, TITLE = {{IO}-Top-k: index-access optimized top-k query processing}, AUTHOR = {Bast, Holger and Majumdar, Debapriyo and Schenkel, Ralf and Theobalt, Christian and Weikum, Gerhard}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-002}, NUMBER = {MPI-I-2006-5-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {Top-k query processing is an important building block for ranked retrieval, with applications ranging from text and data integration to distributed aggregation of network logs and sensor data. Top-k queries operate on index lists for a query's elementary conditions and aggregate scores for result candidates. One of the best implementation methods in this setting is the family of threshold algorithms, which aim to terminate the index scans as early as possible based on lower and upper bounds for the final scores of result candidates. This procedure performs sequential disk accesses for sorted index scans, but also has the option of performing random accesses to resolve score uncertainty. This entails scheduling for the two kinds of accesses: 1) the prioritization of different index lists in the sequential accesses, and 2) the decision on when to perform random accesses and for which candidates. The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation. The current paper takes an integrated view of the scheduling issues and develops novel strategies that outperform prior proposals by a large margin. Our main contributions are new, principled, scheduling methods based on a Knapsack-related optimization for sequential accesses and a cost model for random accesses. The methods can be further boosted by harnessing probabilistic estimators for scores, selectivities, and index list correlations. We also discuss efficient implementation techniques for the underlying data structures. In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB), our methods achieved significant performance gains compared to the best previously known methods: a factor of up to 3 in terms of execution costs, and a factor of 5 in terms of absolute run-times of our implementation. Our best techniques are close to a lower bound for the execution cost of the considered class of threshold algorithms.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
Endnote
%0 Report %A Bast, Holger %A Majumdar, Debapriyo %A Schenkel, Ralf %A Theobalt, Christian %A Weikum, Gerhard %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Computer Graphics, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T IO-Top-k: index-access optimized top-k query processing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6716-E %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 49 p. %X Top-k query processing is an important building block for ranked retrieval, with applications ranging from text and data integration to distributed aggregation of network logs and sensor data. Top-k queries operate on index lists for a query's elementary conditions and aggregate scores for result candidates. One of the best implementation methods in this setting is the family of threshold algorithms, which aim to terminate the index scans as early as possible based on lower and upper bounds for the final scores of result candidates. This procedure performs sequential disk accesses for sorted index scans, but also has the option of performing random accesses to resolve score uncertainty. This entails scheduling for the two kinds of accesses: 1) the prioritization of different index lists in the sequential accesses, and 2) the decision on when to perform random accesses and for which candidates. The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation. The current paper takes an integrated view of the scheduling issues and develops novel strategies that outperform prior proposals by a large margin. Our main contributions are new, principled, scheduling methods based on a Knapsack-related optimization for sequential accesses and a cost model for random accesses. The methods can be further boosted by harnessing probabilistic estimators for scores, selectivities, and index list correlations. We also discuss efficient implementation techniques for the underlying data structures. In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB), our methods achieved significant performance gains compared to the best previously known methods: a factor of up to 3 in terms of execution costs, and a factor of 5 in terms of absolute run-times of our implementation. Our best techniques are close to a lower bound for the execution cost of the considered class of threshold algorithms. %B Research Report / Max-Planck-Institut für Informatik
[22]
E. Berberich, F. Ebert, and L. Kettner, “Definition of File Format for Benchmark Instances for Arrangements of Quadrics,” University of Groningen, Groningen, The Netherlands, ACS-TR-123109-01, 2006.
Export
BibTeX
@techreport{acs:bek-dffbiaq-06, TITLE = {Definition of File Format for Benchmark Instances for Arrangements of Quadrics}, AUTHOR = {Berberich, Eric and Ebert, Franziska and Kettner, Lutz}, LANGUAGE = {eng}, NUMBER = {ACS-TR-123109-01}, INSTITUTION = {University of Groningen}, ADDRESS = {Groningen, The Netherlands}, YEAR = {2006}, DATE = {2006}, }
Endnote
%0 Report %A Berberich, Eric %A Ebert, Franziska %A Kettner, Lutz %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Definition of File Format for Benchmark Instances for Arrangements of Quadrics : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-E509-E %Y University of Groningen %C Groningen, The Netherlands %D 2006
[23]
E. Berberich, F. Ebert, E. Fogel, and L. Kettner, “Web-site with Benchmark Instances for Planar Curve Arrangements,” University of Groningen, Groningen, The Netherlands, ACS-TR-123108-01, 2006.
Export
BibTeX
@techreport{acs:bek-wbipca-06, TITLE = {Web-site with Benchmark Instances for Planar Curve Arrangements}, AUTHOR = {Berberich, Eric and Ebert, Franziska and Fogel, Efi and Kettner, Lutz}, LANGUAGE = {eng}, NUMBER = {ACS-TR-123108-01}, INSTITUTION = {University of Groningen}, ADDRESS = {Groningen, The Netherlands}, YEAR = {2006}, DATE = {2006}, }
Endnote
%0 Report %A Berberich, Eric %A Ebert, Franziska %A Fogel, Efi %A Kettner, Lutz %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Web-site with Benchmark Instances for Planar Curve Arrangements : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-E515-1 %Y University of Groningen %C Groningen, The Netherlands %D 2006
[24]
B. Doerr and M. Gnewuch, “Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding,” University Kiel, Kiel, 06-14, 2006.
Abstract
We provide a deterministic algorithm that constructs small point sets exhibiting a low star discrepancy. The algorithm is based on bracketing and on recent results on randomized roundings respecting hard constraints. It is structurally much simpler than the previous algorithm presented for this problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for the star discrepancy via $\delta$-covers. J. Complexity, 21: 691-709, 2005]. Besides leading to better theoretical run time bounds, our approach can be implemented with reasonable effort.
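As background on the quantity being minimized, the star discrepancy of a small 2-D point set in the unit square can be evaluated exactly: the supremum over anchored boxes is attained on the grid spanned by the point coordinates and 1, with open and closed boxes checked separately. A minimal C++ sketch (ours, not from the report; cubic time, fine for small sets):

    #include <algorithm>
    #include <cmath>
    #include <utility>
    #include <vector>

    // Exact star discrepancy of points in [0,1]^2 via the coordinate grid.
    double starDiscrepancy(const std::vector<std::pair<double, double>>& pts) {
        if (pts.empty()) return 0.0;
        std::vector<double> xs{1.0}, ys{1.0};
        for (const auto& p : pts) { xs.push_back(p.first); ys.push_back(p.second); }
        const double n = pts.size();
        double worst = 0.0;
        for (double x : xs)
            for (double y : ys) {
                int open = 0, closed = 0;
                for (const auto& p : pts) {
                    if (p.first < x && p.second < y) ++open;
                    if (p.first <= x && p.second <= y) ++closed;
                }
                worst = std::max({worst, std::fabs(x * y - open / n),
                                  std::fabs(x * y - closed / n)});
            }
        return worst;
    }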
Export
BibTeX
@techreport{SemKiel, TITLE = {Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding}, AUTHOR = {Doerr, Benjamin and Gnewuch, Michael}, LANGUAGE = {eng}, NUMBER = {06-14}, INSTITUTION = {University Kiel}, ADDRESS = {Kiel}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We provide a deterministic algorithm that constructs small point sets exhibiting a low star discrepancy. The algorithm is based on bracketing and on recent results on randomized roundings respecting hard constraints. It is structurally much simpler than the previous algorithm presented for this problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for the star discrepancy via -covers. J. Complexity, 21: 691-709, 2005]. Besides leading to better theoretical run time bounds, our approach can be implemented with reasonable effort.}, }
Endnote
%0 Report %A Doerr, Benjamin %A Gnewuch, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-E49F-6 %Y University Kiel %C Kiel %D 2006 %X We provide a deterministic algorithm that constructs small point sets exhibiting a low star discrepancy. The algorithm is based on bracketing and on recent results on randomized roundings respecting hard constraints. It is structurally much simpler than the previous algorithm presented for this problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for the star discrepancy via -covers. J. Complexity, 21: 691-709, 2005]. Besides leading to better theoretical run time bounds, our approach can be implemented with reasonable effort.
[25]
K. Elbassioni, “On the Complexity of Monotone Boolean Duality Testing,” DIMACS, Piscataway, NJ, DIMACS TR: 2006-01, 2006.
Abstract
We show that the duality of a pair of monotone Boolean functions in disjunctive normal forms can be tested in polylogarithmic time using a quasi-polynomial number of processors. Our decomposition technique yields stronger bounds on the complexity of the problem than those currently known and also allows for generating all minimal transversals of a given hypergraph using only polynomial space.
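For contrast with the polylogarithmic parallel bound, duality can be checked by brute force in exponential time: $f$ and $g$ are dual iff $f(x) \neq g(\bar{x})$ for every assignment $x$. A minimal C++ sketch with clauses encoded as bitmasks (an illustrative encoding, not from the report):

    #include <cstdint>
    #include <vector>

    // Monotone DNF evaluation: clauses are bitmasks of required variables.
    bool evalDNF(const std::vector<uint32_t>& clauses, uint32_t x) {
        for (uint32_t c : clauses)
            if ((x & c) == c) return true;
        return false;
    }

    // f and g (monotone DNFs over n <= 32 variables) are dual iff
    // f(x) differs from g(complement of x) for every assignment x.
    bool areDual(const std::vector<uint32_t>& f,
                 const std::vector<uint32_t>& g, int n) {
        const uint32_t all = (n == 32) ? ~0u : ((1u << n) - 1u);
        for (uint32_t x = 0;; ++x) {
            if (evalDNF(f, x) == evalDNF(g, all & ~x)) return false;
            if (x == all) break;  // enumerated all 2^n assignments
        }
        return true;
    }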
Export
BibTeX
@techreport{Elbassioni2006, TITLE = {On the Complexity of Monotone {Boolean} Duality Testing}, AUTHOR = {Elbassioni, Khaled}, LANGUAGE = {eng}, NUMBER = {DIMACS TR: 2006-01}, INSTITUTION = {DIMACS}, ADDRESS = {Piscataway, NJ}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We show that the duality of a pair of monotone Boolean functions in disjunctive normal forms can be tested in polylogarithmic time using a quasi-polynomial number of processors. Our decomposition technique yields stronger bounds on the complexity of the problem than those currently known and also allows for generating all minimal transversals of a given hypergraph using only polynomial space.}, }
Endnote
%0 Report %A Elbassioni, Khaled %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Complexity of Monotone Boolean Duality Testing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-E4CA-2 %Y DIMACS %C Piscataway, NJ %D 2006 %X We show that the duality of a pair of monotone Boolean functions in disjunctive normal forms can be tested in polylogarithmic time using a quasi-polynomial number of processors. Our decomposition technique yields stronger bounds on the complexity of the problem than those currently known and also allows for generating all minimal transversals of a given hypergraph using only polynomial space.
[26]
S. Funke, C. Klein, K. Mehlhorn, and S. Schmitt, “Controlled Perturbation for Delaunay Triangulations,” Algorithms for Complex Shapes with certified topology and numerics, Instituut voor Wiskunde en Informatica, Groningen, NETHERLANDS, ACS-TR-121103-03, 2006.
Export
BibTeX
@techreport{acstr123109-01, TITLE = {Controlled Perturbation for Delaunay Triangulations}, AUTHOR = {Funke, Stefan and Klein, Christian and Mehlhorn, Kurt and Schmitt, Susanne}, LANGUAGE = {eng}, NUMBER = {ACS-TR-121103-03}, INSTITUTION = {Algorithms for Complex Shapes with certified topology and numerics}, ADDRESS = {Instituut voor Wiskunde en Informatica, Groningen, NETHERLANDS}, YEAR = {2006}, DATE = {2006}, }
Endnote
%0 Report %A Funke, Stefan %A Klein, Christian %A Mehlhorn, Kurt %A Schmitt, Susanne %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Controlled Perturbation for Delaunay Triangulations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-F72F-3 %Y Algorithms for Complex Shapes with certified topology and numerics %C Instituut voor Wiskunde en Informatica, Groningen, NETHERLANDS %D 2006
[27]
S. Funke, S. Laue, R. Naujoks, and L. Zvi, “Power assignment problems in wireless communication,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2006-1-004, 2006.
Abstract
A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network like broadcast, multicast, point-to-point routing, creation of a communication backbone, etc. can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example one problem studied before in (Vittorio Bilò et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost coverage of point sets by disks, SCG 2006) aims to select and assign powers to $k$ of the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a $(1+\epsilon)$-approximate solution for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$ as reported by Bilò et al. (see Vittorio Bilò et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) to $O\left(n + \left(\frac{k^{2d+1}}{\epsilon^d}\right)^{\min\{2k,\ (\alpha/\epsilon)^{O(d)}\}}\right)$, that is, we obtain a running time that is linear in the network size. Further results include a constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to perform $k$-hop multicasts.
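To make the cost model concrete: with transmission power growing like range$^\alpha$, the cost of a fixed set $S$ of selected senders can be bounded from above by assigning every station to its nearest sender. The C++ sketch below computes that heuristic bound; it illustrates the objective only and is not the report's $(1+\epsilon)$ scheme.

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    struct Pt { double x, y; };

    double dist(const Pt& a, const Pt& b) {
        return std::hypot(a.x - b.x, a.y - b.y);
    }

    // Assign every station to its nearest sender in S (indices into pts) and
    // charge each sender the alpha-th power of its largest assigned distance.
    double coverCost(const std::vector<Pt>& pts, const std::vector<int>& S,
                     double alpha) {
        std::vector<double> radius(S.size(), 0.0);
        for (const Pt& p : pts) {
            size_t best = 0;
            double bd = std::numeric_limits<double>::max();
            for (size_t i = 0; i < S.size(); ++i) {
                double d = dist(p, pts[S[i]]);
                if (d < bd) { bd = d; best = i; }
            }
            radius[best] = std::max(radius[best], bd);
        }
        double cost = 0.0;
        for (double r : radius) cost += std::pow(r, alpha);
        return cost;
    }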
Export
BibTeX
@techreport{, TITLE = {Power assignment problems in wireless communication}, AUTHOR = {Funke, Stefan and Laue, S{\"o}ren and Naujoks, Rouven and Zvi, Lotker}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-004}, NUMBER = {MPI-I-2006-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network like broadcast, multicast, point-to-point routing, creation of a communication backbone, etc. can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example one problem studied before in (Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost coverage of point sets by disks, SCG 2006) aims to select and assign powers to $k$ of the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a $(1+\epsilon)$-approximate solution for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$ as reported by Bil{\`o} et al. (see Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) to $O\left( n+ {\left(\frac{k^{2d+1}}{\epsilon^d}\right)}^{ \min{\{\; 2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$ that is, we obtain a running time that is \emph{linear} in the network size. Further results include a constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to perform $k$-hop multicasts.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
Endnote
%0 Report %A Funke, Stefan %A Laue, Sören %A Naujoks, Rouven %A Zvi, Lotker %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Power assignment problems in wireless communication : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6820-E %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-004 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 25 p. %X A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network like broadcast, multicast, point-to-point routing, creation of a communication backbone, etc. can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example one problem studied before in (Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost coverage of point sets by disks, SCG 2006) aims to select and assign powers to $k$ of the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a $(1+\epsilon)$-approximate solution for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$ as reported by Bil{\`o} et al. (see Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) to $O\left( n+ {\left(\frac{k^{2d+1}}{\epsilon^d}\right)}^{ \min{\{\; 2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$ that is, we obtain a running time that is \emph{linear} in the network size. Further results include a constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to perform $k$-hop multicasts. %B Research Report / Max-Planck-Institut für Informatik
[28]
M. Kerber, “Division-free computation of subresultants using bezout matrices,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2006-1-006, 2006.
Abstract
We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al. (see Abdeljaoued et al.: Minors of Bezout Matrices..., Int. J. of Comp. Math. 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments show that our algorithm is superior to pseudo-division approaches for moderate degrees if the domain contains indeterminates.
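For concreteness, the Bezout matrix at the heart of this approach can be built directly from its definition by expanding $f(x)g(y) - f(y)g(x)$ and dividing by $(x - y)$ with synthetic division; note that this division is only synthetic, so no coefficient is ever divided. A C++ sketch over double coefficients (the report itself works division-free over general domains):

    #include <vector>

    using Poly = std::vector<double>;  // coefficient vector, index = degree

    // Bezout matrix of f and g (degrees <= n), defined by
    //   (f(x)g(y) - f(y)g(x)) / (x - y) = sum_{i,j < n} B[i][j] x^i y^j.
    // Expand the numerator into a bivariate array F, then divide by (x - y)
    // via synthetic division in x with "root" y: Q[n-1] = F[n] and
    // Q[i-1] = F[i] + y * Q[i]; row i of the result is Q[i].
    std::vector<std::vector<double>>
    bezoutMatrix(const Poly& f, const Poly& g, int n) {
        std::vector<std::vector<double>> F(n + 1, std::vector<double>(n + 1, 0.0));
        for (int i = 0; i < (int)f.size(); ++i)
            for (int j = 0; j < (int)g.size(); ++j) {
                F[i][j] += f[i] * g[j];   // f(x) g(y)
                F[j][i] -= f[i] * g[j];   // - f(y) g(x)
            }
        std::vector<std::vector<double>> B(n, std::vector<double>(n + 1, 0.0));
        B[n - 1] = F[n];
        for (int i = n - 1; i >= 1; --i) {
            for (int j = 0; j < n; ++j)       // y * Q[i]: shift up one degree
                B[i - 1][j + 1] = B[i][j];
            for (int j = 0; j <= n; ++j)
                B[i - 1][j] += F[i][j];
        }
        for (auto& row : B) row.resize(n);    // b_{ij} for 0 <= i, j < n
        return B;
    }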
Export
BibTeX
@techreport{, TITLE = {Division-free computation of subresultants using bezout matrices}, AUTHOR = {Kerber, Michael}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-006}, NUMBER = {MPI-I-2006-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2006}, DATE = {2006}, ABSTRACT = {We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al.\ (see Abdeljaoed et al.: Minors of Bezout Matrices\ldots, Int.\ J.\ of Comp.\ Math.\ 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments show that our algorithm is superior compared to pseudo-division approaches for moderate degrees if the domain contains indeterminates.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
Endnote
%0 Report %A Kerber, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Division-free computation of subresultants using bezout matrices : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-681D-7 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-006 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2006 %P 20 p. %X We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al.\ (see Abdeljaoed et al.: Minors of Bezout Matrices\ldots, Int.\ J.\ of Comp.\ Math.\ 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments show that our algorithm is superior compared to pseudo-division approaches for moderate degrees if the domain contains indeterminates. %B Research Report / Max-Planck-Institut für Informatik
2005
[29]
S. Baswana and K. Telikepalli, “Improved algorithms for all-pairs approximate shortest paths in weighted graphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2005-1-003, 2005.
Abstract
The all-pairs approximate shortest-paths problem is an interesting variant of the classical all-pairs shortest-paths problem in graphs. The problem aims at building a data structure for a given graph with the following two features. Firstly, for any two vertices, it should report an approximate shortest path between them, that is, a path which is longer than the shortest path by some small factor. Secondly, the data structure should require less preprocessing time (strictly sub-cubic) and occupy optimal space (sub-quadratic), at the cost of this approximation. In this paper, we present algorithms for computing all-pairs approximate shortest paths in a weighted undirected graph. These algorithms significantly improve the existing results for this problem.
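For scale, the exact baseline that approximate schemes improve on runs a single-source computation from every vertex. A C++ sketch with Dijkstra on an adjacency-list graph (types and names are ours):

    #include <functional>
    #include <limits>
    #include <queue>
    #include <vector>

    // adjacency list: g[u] = list of (neighbor, edge weight)
    using Graph = std::vector<std::vector<std::pair<int, double>>>;

    // Exact all-pairs distances by Dijkstra from every vertex.
    std::vector<std::vector<double>> allPairs(const Graph& g) {
        const double INF = std::numeric_limits<double>::infinity();
        const size_t n = g.size();
        std::vector<std::vector<double>> dist(n, std::vector<double>(n, INF));
        for (size_t s = 0; s < n; ++s) {
            auto& d = dist[s];
            d[s] = 0.0;
            using QE = std::pair<double, int>;
            std::priority_queue<QE, std::vector<QE>, std::greater<QE>> pq;
            pq.push({0.0, (int)s});
            while (!pq.empty()) {
                auto [du, u] = pq.top();
                pq.pop();
                if (du > d[u]) continue;  // stale queue entry
                for (auto [v, w] : g[u])
                    if (du + w < d[v]) { d[v] = du + w; pq.push({d[v], v}); }
            }
        }
        return dist;
    }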
Export
BibTeX
@techreport{, TITLE = {Improved algorithms for all-pairs approximate shortest paths in weighted graphs}, AUTHOR = {Baswana, Surender and Telikepalli, Kavitha}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-003}, NUMBER = {MPI-I-2005-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {The all-pairs approximate shortest-paths problem is an interesting variant of the classical all-pairs shortest-paths problem in graphs. The problem aims at building a data-structure for a given graph with the following two features. Firstly, for any two vertices, it should report an {\emph{approximate}} shortest path between them, that is, a path which is longer than the shortest path by some {\emph{small}} factor. Secondly, the data-structure should require less preprocessing time (strictly sub-cubic) and occupy optimal space (sub-quadratic), at the cost of this approximation. In this paper, we present algorithms for computing all-pairs approximate shortest paths in a weighted undirected graph. These algorithms significantly improve the existing results for this problem.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
Endnote
%0 Report %A Baswana, Surender %A Telikepalli, Kavitha %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Improved algorithms for all-pairs approximate shortest paths in weighted graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6854-9 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-003 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 26 p. %X The all-pairs approximate shortest-paths problem is an interesting variant of the classical all-pairs shortest-paths problem in graphs. The problem aims at building a data-structure for a given graph with the following two features. Firstly, for any two vertices, it should report an {\emph{approximate}} shortest path between them, that is, a path which is longer than the shortest path by some {\emph{small}} factor. Secondly, the data-structure should require less preprocessing time (strictly sub-cubic) and occupy optimal space (sub-quadratic), at the cost of this approximation. In this paper, we present algorithms for computing all-pairs approximate shortest paths in a weighted undirected graph. These algorithms significantly improve the existing results for this problem. %B Research Report / Max-Planck-Institut für Informatik
[30]
R. Dementiev, L. Kettner, and P. Sanders, “STXXL: Standard Template Library for XXL Data Sets,” Fakultät für Informatik, University of Karlsruhe, Karlsruhe, Germany, 2005/18, 2005.
Export
BibTeX
@techreport{Kettner2005StxxlReport, TITLE = {{STXXL}: Standard Template Library for {XXL} Data Sets}, AUTHOR = {Dementiev, Roman and Kettner, Lutz and Sanders, Peter}, LANGUAGE = {eng}, NUMBER = {2005/18}, INSTITUTION = {Fakult{\"a}t f{\"u}r Informatik, University of Karlsruhe}, ADDRESS = {Karlsruhe, Germany}, YEAR = {2005}, DATE = {2005}, }
Endnote
%0 Report %A Dementiev, Roman %A Kettner, Lutz %A Sanders, Peter %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T STXXL: Standard Template Library for XXL Data Sets : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-E689-4 %Y Fakultät für Informatik, University of Karlsruhe %C Karlsruhe, Germany %D 2005
[31]
C. Gotsman, K. Kaligosi, K. Mehlhorn, D. Michail, and E. Pyrga, “Cycle bases of graphs and sampled manifolds,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2005-1-008, 2005.
Abstract
Point samples of a surface in $\mathbb{R}^3$ are the dominant output of a multitude of 3D scanning devices. The usefulness of these devices rests on being able to extract properties of the surface from the sample. We show that, under certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of the sample encodes topological information about the surface and yields bases for the trivial and non-trivial loops of the surface. We validate our results by experiments.
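The input graph in question is straightforward to construct; here is a brute-force C++ sketch of the k-nearest-neighbor graph of a sample (illustrative types; a real pipeline would use a spatial index such as a kd-tree):

    #include <algorithm>
    #include <vector>

    struct P3 { double x, y, z; };

    // Connect each point to its k nearest neighbors by squared distance.
    std::vector<std::vector<int>> knnGraph(const std::vector<P3>& pts, size_t k) {
        auto d2 = [](const P3& a, const P3& b) {
            double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return dx * dx + dy * dy + dz * dz;
        };
        const size_t n = pts.size();
        std::vector<std::vector<int>> adj(n);
        for (size_t i = 0; i < n; ++i) {
            std::vector<std::pair<double, int>> cand;
            for (size_t j = 0; j < n; ++j)
                if (j != i) cand.push_back({d2(pts[i], pts[j]), (int)j});
            const size_t m = std::min(k, cand.size());
            std::partial_sort(cand.begin(), cand.begin() + m, cand.end());
            for (size_t t = 0; t < m; ++t) adj[i].push_back(cand[t].second);
        }
        return adj;
    }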
Export
BibTeX
@techreport{, TITLE = {Cycle bases of graphs and sampled manifolds}, AUTHOR = {Gotsman, Craig and Kaligosi, Kanela and Mehlhorn, Kurt and Michail, Dimitrios and Pyrga, Evangelia}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-008}, NUMBER = {MPI-I-2005-1-008}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {Point samples of a surface in $\R^3$ are the dominant output of a multitude of 3D scanning devices. The usefulness of these devices rests on being able to extract properties of the surface from the sample. We show that, under certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of the sample encodes topological information about the surface and yields bases for the trivial and non-trivial loops of the surface. We validate our results by experiments.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
Endnote
%0 Report %A Gotsman, Craig %A Kaligosi, Kanela %A Mehlhorn, Kurt %A Michail, Dimitrios %A Pyrga, Evangelia %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Cycle bases of graphs and sampled manifolds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-684C-E %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-008 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 30 p. %X Point samples of a surface in $\R^3$ are the dominant output of a multitude of 3D scanning devices. The usefulness of these devices rests on being able to extract properties of the surface from the sample. We show that, under certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of the sample encodes topological information about the surface and yields bases for the trivial and non-trivial loops of the surface. We validate our results by experiments. %B Research Report / Max-Planck-Institut für Informatik
[32]
I. Katriel, M. Kutz, and M. Skutella, “Reachability substitutes for planar digraphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2005-1-002, 2005.
Abstract
Given a digraph $G = (V,E)$ with a set $U$ of vertices marked "interesting," we want to find a smaller digraph $RS = (V',E')$ with $V' \supseteq U$ in such a way that the reachabilities amongst those interesting vertices in $G$ and $RS$ are the same. So with respect to the reachability relations within $U$, the digraph $RS$ is a substitute for $G$. We show that while almost all graphs do not have reachability substitutes smaller than $\Omega(|U|^2/\log |U|)$, every planar graph has a reachability substitute of size $O(|U| \log^2 |U|)$. Our result rests on two new structural results for planar dags, a separation procedure and a reachability theorem, which might be of independent interest.
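The correctness condition on a substitute is the reachability relation restricted to $U$. The C++ sketch below computes that relation on a digraph by a BFS from each interesting vertex, which is how a candidate substitute could be checked against $G$ on small instances (names are ours):

    #include <queue>
    #include <vector>

    // R[a][b] is true iff U[a] reaches U[b] in the digraph adj.
    std::vector<std::vector<bool>>
    reachAmong(const std::vector<std::vector<int>>& adj,
               const std::vector<int>& U) {
        std::vector<std::vector<bool>> R(U.size(),
                                         std::vector<bool>(U.size(), false));
        for (size_t a = 0; a < U.size(); ++a) {
            std::vector<bool> seen(adj.size(), false);
            std::queue<int> q;
            q.push(U[a]);
            seen[U[a]] = true;
            while (!q.empty()) {
                int u = q.front(); q.pop();
                for (int v : adj[u])
                    if (!seen[v]) { seen[v] = true; q.push(v); }
            }
            for (size_t b = 0; b < U.size(); ++b) R[a][b] = seen[U[b]];
        }
        return R;
    }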
Export
BibTeX
@techreport{, TITLE = {Reachability substitutes for planar digraphs}, AUTHOR = {Katriel, Irit and Kutz, Martin and Skutella, Martin}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-002}, NUMBER = {MPI-I-2005-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {Given a digraph $G = (V,E)$ with a set $U$ of vertices marked ``interesting,'' we want to find a smaller digraph $\RS{} = (V',E')$ with $V' \supseteq U$ in such a way that the reachabilities amongst those interesting vertices in $G$ and \RS{} are the same. So with respect to the reachability relations within $U$, the digraph \RS{} is a substitute for $G$. We show that while almost all graphs do not have reachability substitutes smaller than $\Ohmega(|U|^2/\log |U|)$, every planar graph has a reachability substitute of size $\Oh(|U| \log^2 |U|)$. Our result rests on two new structural results for planar dags, a separation procedure and a reachability theorem, which might be of independent interest.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
Endnote
%0 Report %A Katriel, Irit %A Kutz, Martin %A Skutella, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Reachability substitutes for planar digraphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6859-0 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 24 p. %X Given a digraph $G = (V,E)$ with a set $U$ of vertices marked ``interesting,'' we want to find a smaller digraph $\RS{} = (V',E')$ with $V' \supseteq U$ in such a way that the reachabilities amongst those interesting vertices in $G$ and \RS{} are the same. So with respect to the reachability relations within $U$, the digraph \RS{} is a substitute for $G$. We show that while almost all graphs do not have reachability substitutes smaller than $\Ohmega(|U|^2/\log |U|)$, every planar graph has a reachability substitute of size $\Oh(|U| \log^2 |U|)$. Our result rests on two new structural results for planar dags, a separation procedure and a reachability theorem, which might be of independent interest. %B Research Report / Max-Planck-Institut für Informatik
[33]
I. Katriel and M. Kutz, “A faster algorithm for computing a longest common increasing subsequence,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2005-1-007, 2005.
Abstract
Let $A=\langle a_1,\dots,a_n\rangle$ and $B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$, whose elements are drawn from a totally ordered set. We present an algorithm that finds a longest common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$ time and $O(m + n\ell)$ space, where $\ell$ is the length of the output. A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space, so ours is faster for a wide range of values of $m,n$ and $\ell$.
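For comparison, the $\Theta(mn)$ dynamic program of Yang et al. that this report improves on fits in a few lines; a C++ sketch, where dp[j] tracks the best common increasing subsequence ending at $b_j$:

    #include <algorithm>
    #include <vector>

    // Classic LCIS dynamic program: process A element by element, carrying
    // the best extendable prefix value while scanning B left to right.
    int lcisLength(const std::vector<int>& A, const std::vector<int>& B) {
        if (A.empty() || B.empty()) return 0;
        std::vector<int> dp(B.size(), 0);
        for (int a : A) {
            int best = 0;  // max dp[j'] over earlier j' with B[j'] < a
            for (size_t j = 0; j < B.size(); ++j) {
                if (B[j] < a) best = std::max(best, dp[j]);
                else if (B[j] == a) dp[j] = std::max(dp[j], best + 1);
            }
        }
        return *std::max_element(dp.begin(), dp.end());
    }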
Export
BibTeX
@techreport{, TITLE = {A faster algorithm for computing a longest common increasing subsequence}, AUTHOR = {Katriel, Irit and Kutz, Martin}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-007}, NUMBER = {MPI-I-2005-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {Let $A=\langle a_1,\dots,a_n\rangle$ and $B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$, whose elements are drawn from a totally ordered set. We present an algorithm that finds a longest common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$ time and $O(m + n\ell)$ space, where $\ell$ is the length of the output. A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space, so ours is faster for a wide range of values of $m,n$ and $\ell$.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
Endnote
%0 Report %A Katriel, Irit %A Kutz, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A faster algorithm for computing a longest common increasing subsequence : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-684F-8 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-007 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 13 p. %X Let $A=\langle a_1,\dots,a_n\rangle$ and $B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$, whose elements are drawn from a totally ordered set. We present an algorithm that finds a longest common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$ time and $O(m + n\ell)$ space, where $\ell$ is the length of the output. A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space, so ours is faster for a wide range of values of $m,n$ and $\ell$. %B Research Report / Max-Planck-Institut für Informatik
[34]
D. Michail, “Rank-maximal through maximum weight matchings,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2005-1-001, 2005.
Abstract
Given a bipartite graph $G(V, E)$, $V = A \cup B$ (a disjoint union) with $|V| = n$ and $|E| = m$, and a partition of the edge set into $r \le m$ disjoint subsets $E = E_1 \cup E_2 \cup \dots \cup E_r$, which are called ranks, the rank-maximal matching problem is to find a matching $M$ of $G$ such that $|M \cap E_1|$ is maximized and, subject to that, $|M \cap E_2|$ is maximized, and so on. Such a problem arises as an optimization criterion over a possible assignment of a set of applicants to a set of posts. The matching represents the assignment and the ranks on the edges correspond to a ranking on the posts submitted by the applicants. The rank-maximal matching problem has been studied before, and an $O(r \sqrt{n}\, m)$-time, linear-space algorithm [IKMMP] was presented. In this paper we present a new, simpler algorithm which matches the running time and space complexity of that algorithm. The new algorithm is based on a different approach: it exploits the fact that the rank-maximal matching problem can be reduced to a maximum weight matching problem where the weight of an edge of rank $i$ is $2^{\lceil \log n \rceil (r-i)}$. By exploiting the fact that these edge weights are steeply distributed, we design a scaling algorithm which scales by a factor of $n$ in each phase. We also show that in each phase one maximum cardinality computation is sufficient to get a new optimal solution. This answers an open question raised in the same paper on whether the reduction to the maximum-weight matching problem can help derive an efficient algorithm.
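A small observation explains why the reduction works: a matching has at most $n/2$ edges of any one rank, so the steep weights pack the per-rank counts $s_i$ into disjoint blocks of $\lceil \log n \rceil$ bits, and comparing total weights amounts to lexicographic comparison of rank signatures. A C++ sketch of that comparison (signature vectors assumed as input):

    #include <vector>

    // Signatures s = (s_1, ..., s_r), s_i = number of rank-i edges. Because
    // each s_i < 2^ceil(log2 n), the weighted totals order matchings exactly
    // like this lexicographic comparison, without any huge integers.
    bool rankBetter(const std::vector<int>& s1, const std::vector<int>& s2) {
        for (size_t i = 0; i < s1.size() && i < s2.size(); ++i)
            if (s1[i] != s2[i]) return s1[i] > s2[i];
        return false;  // equal signatures (or equal common prefix)
    }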
Export
BibTeX
@techreport{, TITLE = {Rank-maximal through maximum weight matchings}, AUTHOR = {Michail, Dimitrios}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-001}, NUMBER = {MPI-I-2005-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2005}, DATE = {2005}, ABSTRACT = {Given a bipartite graph $G( V, E)$, $ V = A \disjointcup B$ where $|V|=n, |E|=m$ and a partition of the edge set into $r \le m$ disjoint subsets $E = E_1 \disjointcup E_2 \disjointcup \dots \disjointcup E_r$, which are called ranks, the {\em rank-maximal matching} problem is to find a matching $M$ of $G$ such that $|M \cap E_1|$ is maximized and given that $|M \cap E_2|$, and so on. Such a problem arises as an optimization criteria over a possible assignment of a set of applicants to a set of posts. The matching represents the assignment and the ranks on the edges correspond to a ranking on the posts submitted by the applicants. The rank-maximal matching problem has been previously studied where a $O( r \sqrt n m )$ time and linear space algorithm~\cite{IKMMP} was presented. In this paper we present a new simpler algorithm which matches the running time and space complexity of the above algorithm. The new algorithm is based on a different approach, by exploiting that the rank-maximal matching problem can be reduced to a maximum weight matching problem where the weight of an edge of rank $i$ is $2^{ \ceil{\log n} (r-i)}$. By exploiting that these edge weights are steeply distributed we design a scaling algorithm which scales by a factor of $n$ in each phase. We also show that in each phase one maximum cardinality computation is sufficient to get a new optimal solution. This algorithm answers an open question raised on the same paper on whether the reduction to the maximum-weight matching problem can help us derive an efficient algorithm.}, TYPE = {Research Report / Max-Planck-Institut für Informatik}, }
Endnote
%0 Report %A Michail, Dimitrios %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Rank-maximal through maximum weight matchings : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-685C-A %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2005 %P 22 p. %X Given a bipartite graph $G( V, E)$, $ V = A \disjointcup B$ where $|V|=n, |E|=m$ and a partition of the edge set into $r \le m$ disjoint subsets $E = E_1 \disjointcup E_2 \disjointcup \dots \disjointcup E_r$, which are called ranks, the {\em rank-maximal matching} problem is to find a matching $M$ of $G$ such that $|M \cap E_1|$ is maximized and given that $|M \cap E_2|$, and so on. Such a problem arises as an optimization criteria over a possible assignment of a set of applicants to a set of posts. The matching represents the assignment and the ranks on the edges correspond to a ranking on the posts submitted by the applicants. The rank-maximal matching problem has been previously studied where a $O( r \sqrt n m )$ time and linear space algorithm~\cite{IKMMP} was presented. In this paper we present a new simpler algorithm which matches the running time and space complexity of the above algorithm. The new algorithm is based on a different approach, by exploiting that the rank-maximal matching problem can be reduced to a maximum weight matching problem where the weight of an edge of rank $i$ is $2^{ \ceil{\log n} (r-i)}$. By exploiting that these edge weights are steeply distributed we design a scaling algorithm which scales by a factor of $n$ in each phase. We also show that in each phase one maximum cardinality computation is sufficient to get a new optimal solution. This algorithm answers an open question raised on the same paper on whether the reduction to the maximum-weight matching problem can help us derive an efficient algorithm. %B Research Report / Max-Planck-Institut für Informatik
2004
[35]
N. Beldiceanu, I. Katriel, and S. Thiel, “Filtering algorithms for the Same and UsedBy constraints,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2004-1-001, Jan. 2004.
Export
BibTeX
@techreport{, TITLE = {Filtering algorithms for the Same and {UsedBy} constraints}, AUTHOR = {Beldiceanu, Nicolas and Katriel, Irit and Thiel, Sven}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-01}, TYPE = {Research Report}, }
Endnote
%0 Report %A Beldiceanu, Nicolas %A Katriel, Irit %A Thiel, Sven %+ Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Filtering algorithms for the Same and UsedBy constraints : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-290C-C %F EDOC: 237881 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 33 p. %B Research Report
[36]
E. Berberich, A. Eigenwillig, M. Hemmer, S. Hert, L. Kettner, K. Mehlhorn, J. Reichel, S. Schmitt, E. Schömer, D. Weber, and N. Wolpert, “EXACUS : Efficient and Exact Algorithms for Curves and Surfaces,” INRIA, Sophia Antipolis, ECG-TR-361200-02, 2004.
Export
BibTeX
@techreport{Berberich_ECG-TR-361200-02, TITLE = {{EXACUS} : Efficient and Exact Algorithms for Curves and Surfaces}, AUTHOR = {Berberich, Eric and Eigenwillig, Arno and Hemmer, Michael and Hert, Susan and Kettner, Lutz and Mehlhorn, Kurt and Reichel, Joachim and Schmitt, Susanne and Sch{\"o}mer, Elmar and Weber, Dennis and Wolpert, Nicola}, LANGUAGE = {eng}, NUMBER = {ECG-TR-361200-02}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and Surfaces}}, }
Endnote
%0 Report %A Berberich, Eric %A Eigenwillig, Arno %A Hemmer, Michael %A Hert, Susan %A Kettner, Lutz %A Mehlhorn, Kurt %A Reichel, Joachim %A Schmitt, Susanne %A Schömer, Elmar %A Weber, Dennis %A Wolpert, Nicola %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T EXACUS : Efficient and Exact Algorithms for Curves and Surfaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B89-6 %F EDOC: 237751 %Y INRIA %C Sophia Antipolis %D 2004 %P 8 p. %B ECG Technical Report
[37]
E. Berberich, A. Eigenwillig, I. Emiris, E. Fogel, M. Hemmer, D. Halperin, A. Kakargias, L. Kettner, K. Mehlhorn, S. Pion, E. Schömer, M. Teillaud, R. Wein, and N. Wolpert, “An empirical comparison of software for constructing arrangements of curved arcs,” INRIA, Sophia Antipolis, ECG-TR-361200-01, 2004.
Export
BibTeX
@techreport{Berberich_ECG-TR-361200-01, TITLE = {An empirical comparison of software for constructing arrangements of curved arcs}, AUTHOR = {Berberich, Eric and Eigenwillig, Arno and Emiris, Ioannis and Fogel, Efraim and Hemmer, Michael and Halperin, Dan and Kakargias, Athanasios and Kettner, Lutz and Mehlhorn, Kurt and Pion, Sylvain and Sch{\"o}mer, Elmar and Teillaud, Monique and Wein, Ron and Wolpert, Nicola}, LANGUAGE = {eng}, NUMBER = {ECG-TR-361200-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and Surfaces}}, }
Endnote
%0 Report %A Berberich, Eric %A Eigenwillig, Arno %A Emiris, Ioannis %A Fogel, Efraim %A Hemmer, Michael %A Halperin, Dan %A Kakargias, Athanasios %A Kettner, Lutz %A Mehlhorn, Kurt %A Pion, Sylvain %A Schömer, Elmar %A Teillaud, Monique %A Wein, Ron %A Wolpert, Nicola %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T An empirical comparison of software for constructing arrangements of curved arcs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B87-A %F EDOC: 237743 %Y INRIA %C Sophia Antipolis %D 2004 %P 11 p. %B ECG Technical Report
[38]
L. S. Chandran and N. Sivadasan, “On the Hadwiger’s Conjecture for Graphs Products,” Max-Planck-Institut für Informatik, Saarbrücken, Germany, MPI-I-2004-1-006, 2004.
Export
BibTeX
@techreport{TR2004, TITLE = {On the {Hadwiger's} Conjecture for Graphs Products}, AUTHOR = {Chandran, L. Sunil and Sivadasan, N.}, LANGUAGE = {eng}, ISSN = {0946-011X}, NUMBER = {MPI-I-2004-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken, Germany}, YEAR = {2004}, DATE = {2004}, TYPE = {Research Report}, }
Endnote
%0 Report %A Chandran, L. Sunil %A Sivadasan, N. %+ Discrete Optimization, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Hadwiger's Conjecture for Graphs Products : %G eng %U http://hdl.handle.net/11858/00-001M-0000-001A-0C8F-A %@ 0946-011X %Y Max-Planck-Institut für Informatik %C Saarbrücken, Germany %D 2004 %B Research Report
[39]
L. S. Chandran and N. Sivadasan, “On the Hadwiger’s conjecture for graph products,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2004-1-006, 2004.
Export
BibTeX
@techreport{, TITLE = {On the Hadwiger's conjecture for graph products}, AUTHOR = {Chandran, L. Sunil and Sivadasan, Naveen}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, TYPE = {Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}: Research Report}, EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
Endnote
%0 Report %A Chandran, L. Sunil %A Sivadasan, Naveen %+ Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Hadwiger's conjecture for graph products : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2BA6-4 %F EDOC: 241593 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 10 p. %B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
[40]
S. Funke, K. Mehlhorn, S. Schmitt, C. Burnikel, R. Fleischer, and S. Schirra, “The LEDA class real number - extended version,” INRIA, Sophia Antipolis, ECG-TR-363110-01, 2004.
Export
BibTeX
@techreport{Funke_ECG-TR-363110-01, TITLE = {The {LEDA} class real number -- extended version}, AUTHOR = {Funke, Stefan and Mehlhorn, Kurt and Schmitt, Susanne and Burnikel, Christoph and Fleischer, Rudolf and Schirra, Stefan}, LANGUAGE = {eng}, NUMBER = {ECG-TR-363110-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and Surfaces}}, }
Endnote
%0 Report %A Funke, Stefan %A Mehlhorn, Kurt %A Schmitt, Susanne %A Burnikel, Christoph %A Fleischer, Rudolf %A Schirra, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T The LEDA class real number - extended version : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B8C-F %F EDOC: 237780 %Y INRIA %C Sophia Antipolis %D 2004 %P 2 p. %B ECG Technical Report
[41]
M. Hemmer, L. Kettner, and E. Schömer, “Effects of a modular filter on geometric applications,” INRIA, Sophia Antipolis, ECG-TR-363111-01, 2004.
Export
BibTeX
@techreport{Hemmer_ECG-TR-363111-01, TITLE = {Effects of a modular filter on geometric applications}, AUTHOR = {Hemmer, Michael and Kettner, Lutz and Sch{\"o}mer, Elmar}, LANGUAGE = {eng}, NUMBER = {ECG-TR-363111-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and Surfaces}}, }
Endnote
%0 Report %A Hemmer, Michael %A Kettner, Lutz %A Schömer, Elmar %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Effects of a modular filter on geometric applications : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B8F-9 %F EDOC: 237782 %Y INRIA %C Sophia Antipolis %D 2004 %P 7 p. %B ECG Technical Report
[42]
I. Katriel, “On algorithms for online topological ordering and sorting,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2004-1-003, Feb. 2004.
Export
BibTeX
@techreport{, TITLE = {On algorithms for online topological ordering and sorting}, AUTHOR = {Katriel, Irit}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-02}, TYPE = {Research Report}, }
Endnote
%0 Report %A Katriel, Irit %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On algorithms for online topological ordering and sorting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2906-7 %F EDOC: 237878 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 12 p. %B Research Report
[43]
L. Kettner, K. Mehlhorn, S. Pion, S. Schirra, and C. Yap, “Classroom examples of robustness problems in geometric computations,” INRIA, Sophia Antipolis, ECG-TR-363100-01, 2004.
Export
BibTeX
@techreport{Kettner_ECG-TR-363100-01, TITLE = {Classroom examples of robustness problems in geometric computations}, AUTHOR = {Kettner, Lutz and Mehlhorn, Kurt and Pion, Sylvain and Schirra, Stefan and Yap, Chee}, LANGUAGE = {eng}, NUMBER = {ECG-TR-363100-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and Surfaces}}, VOLUME = {3221}, }
Endnote
%0 Report %A Kettner, Lutz %A Mehlhorn, Kurt %A Pion, Sylvain %A Schirra, Stefan %A Yap, Chee %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Classroom examples of robustness problems in geometric computations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B92-0 %F EDOC: 237797 %Y INRIA %C Sophia Antipolis %D 2004 %P 12 p. %B ECG Technical Report %N 3221
[44]
C. Klein, “A fast root checking algorithm,” INRIA, Sophia Antipolis, ECG-TR-363109-02, 2004.
Export
BibTeX
@techreport{Klein_ECG-TR-363109-02, TITLE = {A fast root checking algorithm}, AUTHOR = {Klein, Christian}, LANGUAGE = {eng}, NUMBER = {ECG-TR-363109-02}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and Surfaces}}, }
Endnote
%0 Report %A Klein, Christian %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A fast root checking algorithm : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B96-8 %F EDOC: 237826 %Y INRIA %C Sophia Antipolis %D 2004 %P 11 p. %B ECG Technical Report
[45]
W. Krandick and K. Mehlhorn, “New bounds for the Descartes method,” Drexel University, Philadelphia, Pa., DU-CS-04-04, 2004.
Export
BibTeX
@techreport{Krandick_DU-CS-04-04, TITLE = {New bounds for the Descartes method}, AUTHOR = {Krandick, Werner and Mehlhorn, Kurt}, LANGUAGE = {eng}, NUMBER = {DU-CS-04-04}, INSTITUTION = {Drexel University}, ADDRESS = {Philadelphia, Pa.}, YEAR = {2004}, DATE = {2004}, TYPE = {Drexel University / Department of Computer Science: Technical Report}, EDITOR = {{Drexel University {\textless}Philadelphia, Pa.{\textgreater} / Department of Computer Science}}, }
Endnote
%0 Report %A Krandick, Werner %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T New bounds for the Descartes method : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B99-2 %F EDOC: 237829 %Y Drexel University %C Philadelphia, Pa. %D 2004 %P 18 p. %B Drexel University / Department of Computer Science: Technical Report
[46]
P. Sanders and S. Pettie, “A simpler linear time 2/3-epsilon approximation,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2004-1-002, Jan. 2004.
Export
BibTeX
@techreport{, TITLE = {A simpler linear time 2/3-epsilon approximation}, AUTHOR = {Sanders, Peter and Pettie, Seth}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-01}, TYPE = {Max-Planck-Institut f{\"u}r Informatik <Saarbr{\"u}cken>: Research Report}, EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
Endnote
%0 Report %A Sanders, Peter %A Pettie, Seth %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A simpler linear time 2/3-epsilon approximation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2909-1 %F EDOC: 237880 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 7 p. %B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
[47]
P. Sanders and S. Pettie, “A simpler linear time 2/3 - epsilon approximation for maximum weight matching,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2004-1-002, 2004.
Abstract
We present two $\frac{2}{3} - \epsilon$ approximation algorithms for the maximum weight matching problem that run in time $O(m\log\frac{1}{\epsilon})$. We give a simple and practical randomized algorithm and a somewhat more complicated deterministic algorithm. Both algorithms are exponentially faster in terms of $\epsilon$ than a recent algorithm by Drake and Hougardy. We also show that our algorithms can be generalized to find a $1-\epsilon$ approximation to the maximum weight matching, for any $\epsilon>0$.
Export
BibTeX
@techreport{, TITLE = {A simpler linear time 2/3 -- epsilon approximation for maximum weight matching}, AUTHOR = {Sanders, Peter and Pettie, Seth}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-002}, NUMBER = {MPI-I-2004-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, ABSTRACT = {We present two $\frac{2}{3} - \epsilon$ approximation algorithms for the maximum weight matching problem that run in time $O(m\log\frac{1}{\epsilon})$. We give a simple and practical randomized algorithm and a somewhat more complicated deterministic algorithm. Both algorithms are exponentially faster in terms of $\epsilon$ than a recent algorithm by Drake and Hougardy. We also show that our algorithms can be generalized to find a $1-\epsilon$ approximation to the maximum weight matching, for any $\epsilon>0$.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Sanders, Peter %A Pettie, Seth %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A simpler linear time 2/3 - epsilon approximation for maximum weight matching : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6862-B %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-002 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 10 p. %X We present two $\frac{2}{3} - \epsilon$ approximation algorithms for the maximum weight matching problem that run in time $O(m\log\frac{1}{\epsilon})$. We give a simple and practical randomized algorithm and a somewhat more complicated deterministic algorithm. Both algorithms are exponentially faster in terms of $\epsilon$ than a recent algorithm by Drake and Hougardy. We also show that our algorithms can be generalized to find a $1-\epsilon$ approximation to the maximum weight matching, for any $\epsilon>0$. %B Research Report / Max-Planck-Institut für Informatik
[48]
S. Schmitt, “Common subexpression search in LEDA_reals : a study of the diamond-operator,” INRIA, Sophia Antipolis, ECG-TR-363109-01, 2004.
Export
BibTeX
@techreport{Schmitt_ECG-TR-363109-01, TITLE = {Common subexpression search in {LEDA}{\textunderscore}reals : a study of the diamond-operator}, AUTHOR = {Schmitt, Susanne}, LANGUAGE = {eng}, NUMBER = {ECG-TR-363109-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and Surfaces}}, }
Endnote
%0 Report %A Schmitt, Susanne %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Common subexpression search in LEDA_reals : a study of the diamond-operator : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B9C-B %F EDOC: 237830 %Y INRIA %C Sophia Antipolis %D 2004 %P 5 p. %B ECG Technical Report
[49]
S. Schmitt, “Improved separation bounds for the diamond operator,” INRIA, Sophia Antipolis, ECG-TR-363108-01, 2004.
Export
BibTeX
@techreport{Schmitt_ECG-TR-363108-01, TITLE = {Improved separation bounds for the diamond operator}, AUTHOR = {Schmitt, Susanne}, LANGUAGE = {eng}, NUMBER = {ECG-TR-363108-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia Antipolis}, YEAR = {2004}, DATE = {2004}, TYPE = {ECG Technical Report}, EDITOR = {{Effective Computational Geometry for Curves and Surfaces}}, }
Endnote
%0 Report %A Schmitt, Susanne %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Improved separation bounds for the diamond operator : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-2B9F-5 %F EDOC: 237831 %Y INRIA %C Sophia Antipolis %D 2004 %P 13 p. %B ECG Technical Report
[50]
S. Schmitt and L. Fousse, “A comparison of polynomial evaluation schemes,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2004-1-005, Jun. 2004.
Export
BibTeX
@techreport{, TITLE = {A comparison of polynomial evaluation schemes}, AUTHOR = {Schmitt, Susanne and Fousse, Laurent}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-06}, TYPE = {Max-Planck-Institut f{\"u}r Informatik <Saarbr{\"u}cken>: Research Report}, EDITOR = {Becker and {Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
Endnote
%0 Report %A Schmitt, Susanne %A Fousse, Laurent %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A comparison of polynomial evaluation schemes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-28EC-B %F EDOC: 237875 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 16 p. %B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
[51]
N. Sivadasan, P. Sanders, and M. Skutella, “On scheduling with bounded migration,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2004-1-004, May 2004.
Export
BibTeX
@techreport{, TITLE = {On scheduling with bounded migration}, AUTHOR = {Sivadasan, Naveen and Sanders, Peter and Skutella, Martin}, LANGUAGE = {eng}, NUMBER = {MPI-I-2004-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004-05}, TYPE = {Max-Planck-Institut f{\"u}r Informatik <Saarbr{\"u}cken>: Research Report}, EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}}, }
Endnote
%0 Report %A Sivadasan, Naveen %A Sanders, Peter %A Skutella, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On scheduling with bounded migration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-000F-28F9-D %F EDOC: 237877 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 22 p. %B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
[52]
N. Sivadasan, P. Sanders, and M. Skutella, “Online scheduling with bounded migration,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2004-1-004, 2004.
Export
BibTeX
@techreport{, TITLE = {Online scheduling with bounded migration}, AUTHOR = {Sivadasan, Naveen and Sanders, Peter and Skutella, Martin}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-004}, NUMBER = {MPI-I-2004-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2004}, DATE = {2004}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Sivadasan, Naveen %A Sanders, Peter %A Skutella, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Online scheduling with bounded migration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-685F-4 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-004 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2004 %P 21 p. %B Research Report / Max-Planck-Institut für Informatik
2003
[53]
E. Althaus, T. Polzin, and S. Daneshmand, “Improving linear programming approaches for the Steiner tree problem,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-004, 2003.
Abstract
We present two theoretically interesting and empirically successful techniques for improving the linear programming approaches, namely graph transformation and local cuts, in the context of the Steiner problem. We show the impact of these techniques on the solution of the largest benchmark instances ever solved.
Export
BibTeX
@techreport{MPI-I-2003-1-004, TITLE = {Improving linear programming approaches for the Steiner tree problem}, AUTHOR = {Althaus, Ernst and Polzin, Tobias and Daneshmand, Siavash}, LANGUAGE = {eng}, NUMBER = {MPI-I-2003-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We present two theoretically interesting and empirically successful techniques for improving the linear programming approaches, namely graph transformation and local cuts, in the context of the Steiner problem. We show the impact of these techniques on the solution of the largest benchmark instances ever solved.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Althaus, Ernst %A Polzin, Tobias %A Daneshmand, Siavash %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Improving linear programming approaches for the Steiner tree problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6BB9-F %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 19 p. %X We present two theoretically interesting and empirically successful techniques for improving the linear programming approaches, namely graph transformation and local cuts, in the context of the Steiner problem. We show the impact of these techniques on the solution of the largest benchmark instances ever solved. %B Research Report / Max-Planck-Institut für Informatik
[54]
R. Beier and B. Vöcking, “Random knapsack in expected polynomial time,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-003, 2003.
Abstract
In this paper, we present the first average-case analysis proving an expected polynomial running time for an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that the number of {\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings) to this problem is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently so that a polynomial upper bound on the number of dominating solutions implies an algorithm with expected polynomial running time. The random input model underlying our analysis is very general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general probability distributions with finite mean, and, in its most general form, can even handle different probability distributions for the profits of different items. This feature enables us to study the effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly correlated ones.
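The Nemhauser-Ullmann enumeration that the abstract builds on is easy to sketch. The following minimal Python illustration (function name and toy instance are ours, not from the report) maintains the set of Pareto-optimal (weight, profit) pairs item by item; the report's contribution is the proof that this set stays polynomially small in expectation.

    def pareto_knapsack(items):
        # Nemhauser-Ullmann-style enumeration: after each item, `front` holds
        # all Pareto-optimal (weight, profit) pairs over subsets of the items
        # seen so far; no kept pair has another with <= weight and >= profit.
        front = [(0, 0)]
        for w, p in items:
            shifted = [(fw + w, fp + p) for fw, fp in front]
            merged = sorted(front + shifted, key=lambda t: (t[0], -t[1]))
            front, best = [], -1
            for weight, profit in merged:
                if profit > best:  # strictly better profit => not dominated
                    front.append((weight, profit))
                    best = profit
        return front

    # For capacity c, the optimum is the most profitable pair with weight <= c.
    print(pareto_knapsack([(3, 7), (2, 4), (5, 9)]))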
Export
BibTeX
@techreport{, TITLE = {Random knapsack in expected polynomial time}, AUTHOR = {Beier, Ren{\'e} and V{\"o}cking, Berthold}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-003}, NUMBER = {MPI-I-2003-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {In this paper, we present the first average-case analysis proving an expected polynomial running time for an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that the number of {\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings) to this problem is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently so that a polynomial upper bound on the number of dominating solutions implies an algorithm with expected polynomial running time. The random input model underlying our analysis is very general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general probability distributions with finite mean, and, in its most general form, can even handle different probability distributions for the profits of different items. This feature enables us to study the effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly correlated ones.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Beier, René %A Vöcking, Berthold %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Random knapsack in expected polynomial time : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6BBC-9 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-003 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 22 p. %X In this paper, we present the first average-case analysis proving an expected polynomial running time for an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that the number of {\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings) to this problem is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently so that a polynomial upper bound on the number of dominating solutions implies an algorithm with expected polynomial running time. The random input model underlying our analysis is very general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general probability distributions with finite mean, and, in its most general form, can even handle different probability distributions for the profits of different items. This feature enables us to study the effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly correlated ones. %B Research Report / Max-Planck-Institut für Informatik
[55]
L. S. Chandran and C. R. Subramanian, “Girth and treewidth,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-NWG2-001, 2003.
Export
BibTeX
@techreport{, TITLE = {Girth and treewidth}, AUTHOR = {Chandran, L. Sunil and Subramanian, C. R.}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-NWG2-001}, NUMBER = {MPI-I-2003-NWG2-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Chandran, L. Sunil %A Subramanian, C. R. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Girth and treewidth : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6868-0 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-NWG2-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 11 p. %B Research Report / Max-Planck-Institut für Informatik
[56]
B. Csaba, “On the Bollobás–Eldridge conjecture for bipartite graphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-009, 2003.
Abstract
Let $G$ be a simple graph on $n$ vertices. A conjecture of Bollob\'as and Eldridge~\cite{be78} asserts that if $\delta (G)\ge {kn-1 \over k+1}$ then $G$ contains any $n$ vertex graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we prove that if $H$ is bipartite, $3 \le \Delta(H)$ is bounded and $n$ is sufficiently large, then there exists $\beta >0$ such that if $\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$, then $H \subset G$.
Export
BibTeX
@techreport{Csaba2003, TITLE = {On the Bollob{\'a}s -- Eldridge conjecture for bipartite graphs}, AUTHOR = {Csaba, Bela}, LANGUAGE = {eng}, NUMBER = {MPI-I-2003-1-009}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Let $G$ be a simple graph on $n$ vertices. A conjecture of Bollob\'as and Eldridge~\cite{be78} asserts that if $\delta (G)\ge {kn-1 \over k+1}$ then $G$ contains any $n$ vertex graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we prove that if $H$ is bipartite, $3 \le \Delta(H)$ is bounded and $n$ is sufficiently large, then there exists $\beta >0$ such that if $\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$, then $H \subset G$.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Csaba, Bela %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Bollobás -- Eldridge conjecture for bipartite graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B3A-F %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 29 p. %X Let $G$ be a simple graph on $n$ vertices. A conjecture of Bollob\'as and Eldridge~\cite{be78} asserts that if $\delta (G)\ge {kn-1 \over k+1}$ then $G$ contains any $n$ vertex graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we prove that if $H$ is bipartite, $3 \le \Delta(H)$ is bounded and $n$ is sufficiently large, then there exists $\beta >0$ such that if $\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$, then $H \subset G$. %B Research Report / Max-Planck-Institut für Informatik
[57]
M. Dietzfelbinger and H. Tamaki, “On the probability of rendezvous in graphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-006, 2003.
Abstract
In a simple graph $G$ without isolated nodes the following random experiment is carried out: each node chooses one of its neighbors uniformly at random. We say a rendezvous occurs if there are adjacent nodes $u$ and $v$ such that $u$ chooses $v$ and $v$ chooses $u$; the probability that this happens is denoted by $s(G)$. M{\'e}tivier \emph{et al.} (2000) asked whether it is true that $s(G)\ge s(K_n)$ for all $n$-node graphs $G$, where $K_n$ is the complete graph on $n$ nodes. We show that this is the case. Moreover, we show that evaluating $s(G)$ for a given graph $G$ is a \#P-complete problem, even if only $d$-regular graphs are considered, for any $d\ge5$.
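The quantity $s(G)$ defined in the abstract is straightforward to estimate empirically; a small Monte Carlo sketch in Python (our naming, not from the report) makes the random experiment concrete:

    import random

    def rendezvous_probability(adj, trials=100_000):
        # adj maps each node to a non-empty list of its neighbors (a simple
        # graph with no isolated nodes, as the abstract requires).
        hits = 0
        for _ in range(trials):
            choice = {u: random.choice(nbrs) for u, nbrs in adj.items()}
            # A rendezvous: some node u picked v while v picked u.
            if any(choice[choice[u]] == u for u in adj):
                hits += 1
        return hits / trials

    # The report's claim: no n-node graph does worse than the clique K_n.
    k4 = {u: [v for v in range(4) if v != u] for u in range(4)}
    print(rendezvous_probability(k4))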
Export
BibTeX
@techreport{MPI-I-94-224, TITLE = {On the probability of rendezvous in graphs}, AUTHOR = {Dietzfelbinger, Martin and Tamaki, Hisao}, LANGUAGE = {eng}, NUMBER = {MPI-I-2003-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {In a simple graph $G$ without isolated nodes the following random experiment is carried out: each node chooses one of its neighbors uniformly at random. We say a rendezvous occurs if there are adjacent nodes $u$ and $v$ such that $u$ chooses $v$ and $v$ chooses $u$; the probability that this happens is denoted by $s(G)$. M{\'e}tivier \emph{et al.} (2000) asked whether it is true that $s(G)\ge s(K_n)$ for all $n$-node graphs $G$, where $K_n$ is the complete graph on $n$ nodes. We show that this is the case. Moreover, we show that evaluating $s(G)$ for a given graph $G$ is a \#P-complete problem, even if only $d$-regular graphs are considered, for any $d\ge5$.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Dietzfelbinger, Martin %A Tamaki, Hisao %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the probability of rendezvous in graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B83-7 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 30 p. %X In a simple graph $G$ without isolated nodes the following random experiment is carried out: each node chooses one of its neighbors uniformly at random. We say a rendezvous occurs if there are adjacent nodes $u$ and $v$ such that $u$ chooses $v$ and $v$ chooses $u$; the probability that this happens is denoted by $s(G)$. M{\'e}tivier \emph{et al.} (2000) asked whether it is true that $s(G)\ge s(K_n)$ for all $n$-node graphs $G$, where $K_n$ is the complete graph on $n$ nodes. We show that this is the case. Moreover, we show that evaluating $s(G)$ for a given graph $G$ is a \#P-complete problem, even if only $d$-regular graphs are considered, for any $d\ge5$. %B Research Report / Max-Planck-Institut für Informatik
[58]
M. Dietzfelbinger and P. Woelfel, “Almost random graphs with simple hash functions,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-005, 2003.
Abstract
We describe a simple randomized construction for generating pairs of hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1} and W=[m] so that for every key set S\subseteq U with n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure that is essentially random. The construction combines d-wise independent classes for d a relatively small constant with the well-known technique of random offsets. While keeping the space needed to store the description of h_1 and h_2 at O(n^zeta), for zeta<1 fixed arbitrarily, we obtain a much smaller (constant) evaluation time than previous constructions of this kind, which involved Siegel's high-performance hash classes. The main new technique is the combined analysis of the graph structure and the inner structure of the hash functions, as well as a new way of looking at the cycle structure of random (multi)graphs. The construction may be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001), to obtain a simpler and faster alternative to a recent construction of Östlin and Pagh (2002/03) for simulating uniform hashing on a key set S, and to the simulation of shared memory on distributed memory machines. We also describe a novel way of implementing (approximate) d-wise independent hashing without using polynomials.
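The graph construction in the abstract is the one underlying cuckoo hashing. The Python sketch below (fully random table hashes stand in for the report's d-wise independent classes; all names are ours) builds the bipartite multigraph and checks the structural property that matters, namely that no connected component has more edges than nodes:

    import random

    def random_table_hash(universe, m):
        # A fully random hash function realized as a lookup table; an
        # idealized stand-in, not the report's space-efficient construction.
        table = {x: random.randrange(m) for x in universe}
        return table.__getitem__

    def cuckoo_feasible(keys, h1, h2, m):
        # One edge (h1(x), m + h2(x)) per key x; cuckoo hashing succeeds iff
        # every component has at most one cycle, i.e. no component has more
        # edges than nodes. Checked with a union-find structure.
        parent, size, edges = list(range(2 * m)), [1] * (2 * m), [0] * (2 * m)
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v
        for x in keys:
            a, b = find(h1(x)), find(m + h2(x))
            if a == b:
                edges[a] += 1
            else:
                parent[b] = a
                size[a] += size[b]
                edges[a] += edges[b] + 1
        return all(edges[find(v)] <= size[find(v)] for v in range(2 * m))

    # n = 100 keys, m = 150 table slots, i.e. n <= m/(1+epsilon) for epsilon = 0.5.
    keys = range(100)
    print(cuckoo_feasible(keys, random_table_hash(keys, 150),
                          random_table_hash(keys, 150), 150))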
Export
BibTeX
@techreport{, TITLE = {Almost random graphs with simple hash functions}, AUTHOR = {Dietzfelbinger, Martin and Woelfel, Philipp}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-005}, NUMBER = {MPI-I-2003-1-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We describe a simple randomized construction for generating pairs of hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1} and W=[m] so that for every key set S\subseteq U with n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure that is essentially random. The construction combines d-wise independent classes for d a relatively small constant with the well-known technique of random offsets. While keeping the space needed to store the description of h_1 and h_2 at O(n^zeta), for zeta<1 fixed arbitrarily, we obtain a much smaller (constant) evaluation time than previous constructions of this kind, which involved Siegel's high-performance hash classes. The main new technique is the combined analysis of the graph structure and the inner structure of the hash functions, as well as a new way of looking at the cycle structure of random (multi)graphs. The construction may be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001), to obtain a simpler and faster alternative to a recent construction of {\"O}stlin and Pagh (2002/03) for simulating uniform hashing on a key set S, and to the simulation of shared memory on distributed memory machines. We also describe a novel way of implementing (approximate) d-wise independent hashing without using polynomials.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Dietzfelbinger, Martin %A Woelfel, Philipp %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Almost random graphs with simple hash functions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6BB3-C %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-005 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 23 p. %X We describe a simple randomized construction for generating pairs of hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1} and W=[m] so that for every key set S\subseteq U with n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure that is essentially random. The construction combines d-wise independent classes for d a relatively small constant with the well-known technique of random offsets. While keeping the space needed to store the description of h_1 and h_2 at O(n^zeta), for zeta<1 fixed arbitrarily, we obtain a much smaller (constant) evaluation time than previous constructions of this kind, which involved Siegel's high-performance hash classes. The main new technique is the combined analysis of the graph structure and the inner structure of the hash functions, as well as a new way of looking at the cycle structure of random (multi)graphs. The construction may be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001), to obtain a simpler and faster alternative to a recent construction of Östlin and Pagh (2002/03) for simulating uniform hashing on a key set S, and to the simulation of shared memory on distributed memory machines. We also describe a novel way of implementing (approximate) d-wise independent hashing without using polynomials. %B Research Report / Max-Planck-Institut für Informatik
[59]
E. Fogel, D. Halperin, R. Wein, M. Teillaud, E. Berberich, A. Eigenwillig, S. Hert, and L. Kettner, “Specification of the Traits Classes for CGAL Arrangements of Curves,” INRIA, Sophia-Antipolis, ECG-TR-241200-01, 2003.
Export
BibTeX
@techreport{ecg:fhw-stcca-03, TITLE = {Specification of the Traits Classes for {CGAL} Arrangements of Curves}, AUTHOR = {Fogel, Efi and Halperin, Dan and Wein, Ron and Teillaud, Monique and Berberich, Eric and Eigenwillig, Arno and Hert, Susan and Kettner, Lutz}, LANGUAGE = {eng}, NUMBER = {ECG-TR-241200-01}, INSTITUTION = {INRIA}, ADDRESS = {Sophia-Antipolis}, YEAR = {2003}, DATE = {2003}, TYPE = {Technical Report}, }
Endnote
%0 Report %A Fogel, Efi %A Halperin, Dan %A Wein, Ron %A Teillaud, Monique %A Berberich, Eric %A Eigenwillig, Arno %A Hert, Susan %A Kettner, Lutz %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Specification of the Traits Classes for CGAL Arrangements of Curves : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0027-B4C6-5 %Y INRIA %C Sophia-Antipolis %D 2003 %B Technical Report
[60]
I. Katriel and S. Thiel, “Fast bound consistency for the global cardinality constraint,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-013, 2003.
Abstract
We show an algorithm for bound consistency of {\em global cardinality constraints}, which runs in time $O(n+n')$ plus the time required to sort the assignment variables by range endpoints, where $n$ is the number of assignment variables and $n'$ is the number of values in the union of their ranges. We thus offer a fast alternative to R\'egin's arc consistency algorithm~\cite{Regin} which runs in time $O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm also achieves bound consistency for the number of occurrences of each value, which has not been done before.
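For very small instances the semantics of bound consistency can be spelled out by brute force. The exponential reference sketch below (our naming and toy instance) computes the same narrowed variable bounds that the report's algorithm obtains in near-linear time:

    from itertools import product
    from collections import Counter

    def bound_consistent_ranges(ranges, occ):
        # ranges: list of (lo, hi) variable ranges; occ: value -> (min, max)
        # occurrence bounds. Enumerate all assignments (exponential!) and
        # shrink each range to the endpoints realized in feasible solutions.
        feasible = []
        for a in product(*(range(lo, hi + 1) for lo, hi in ranges)):
            counts = Counter(a)
            if all(lo <= counts.get(v, 0) <= hi for v, (lo, hi) in occ.items()):
                feasible.append(a)
        if not feasible:
            return None  # constraint unsatisfiable
        return [(min(a[i] for a in feasible), max(a[i] for a in feasible))
                for i in range(len(ranges))]

    # Value 1 may occur at most once; the first variable is fixed to 1, so the
    # other two ranges shrink from (1, 2) to (2, 2).
    print(bound_consistent_ranges([(1, 1), (1, 2), (1, 2)], {1: (0, 1)}))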
Export
BibTeX
@techreport{, TITLE = {Fast bound consistency for the global cardinality constraint}, AUTHOR = {Katriel, Irit and Thiel, Sven}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-013}, NUMBER = {MPI-I-2003-1-013}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We show an algorithm for bound consistency of {\em global cardinality constraints}, which runs in time $O(n+n')$ plus the time required to sort the assignment variables by range endpoints, where $n$ is the number of assignment variables and $n'$ is the number of values in the union of their ranges. We thus offer a fast alternative to R\'egin's arc consistency algorithm~\cite{Regin} which runs in time $O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm also achieves bound consistency for the number of occurrences of each value, which has not been done before.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Katriel, Irit %A Thiel, Sven %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fast bound consistency for the global cardinality constraint : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B1F-D %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-013 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 30 p. %X We show an algorithm for bound consistency of {\em global cardinality constraints}, which runs in time $O(n+n')$ plus the time required to sort the assignment variables by range endpoints, where $n$ is the number of assignment variables and $n'$ is the number of values in the union of their ranges. We thus offer a fast alternative to R\'egin's arc consistency algorithm~\cite{Regin} which runs in time $O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm also achieves bound consistency for the number of occurrences of each value, which has not been done before. %B Research Report / Max-Planck-Institut für Informatik
[61]
A. Kovács, “Sum-Multicoloring on paths,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-015, 2003.
Abstract
The question whether the preemptive Sum Multicoloring (pSMC) problem is hard on paths was raised by Halldorsson et al. ["Multi-coloring trees", Information and Computation, 180(2):113-129, 2002]. The pSMC problem is a scheduling problem where the pairwise conflicting jobs are represented by a conflict graph, and the time lengths of jobs by integer weights on the nodes. The goal is to schedule the jobs so that the sum of their finishing times is minimized. In the paper we give an O(n^3p) time algorithm for the pSMC problem on paths, where n is the number of nodes and p is the largest time length. The result easily carries over to cycles.
Export
BibTeX
@techreport{, TITLE = {Sum-Multicoloring on paths}, AUTHOR = {Kov{\'a}cs, Annamaria}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-015}, NUMBER = {MPI-I-2003-1-015}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {The question whether the preemptive Sum Multicoloring (pSMC) problem is hard on paths was raised by Halldorsson et al. ["Multi-coloring trees", Information and Computation, 180(2):113-129, 2002]. The pSMC problem is a scheduling problem where the pairwise conflicting jobs are represented by a conflict graph, and the time lengths of jobs by integer weights on the nodes. The goal is to schedule the jobs so that the sum of their finishing times is minimized. In the paper we give an O(n^3p) time algorithm for the pSMC problem on paths, where n is the number of nodes and p is the largest time length. The result easily carries over to cycles.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Kovács, Annamaria %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Sum-Multicoloring on paths : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B18-C %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-015 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 20 p. %X The question whether the preemptive Sum Multicoloring (pSMC) problem is hard on paths was raised by Halldorsson et al. ["Multi-coloring trees", Information and Computation, 180(2):113-129, 2002]. The pSMC problem is a scheduling problem where the pairwise conflicting jobs are represented by a conflict graph, and the time lengths of jobs by integer weights on the nodes. The goal is to schedule the jobs so that the sum of their finishing times is minimized. In the paper we give an O(n^3p) time algorithm for the pSMC problem on paths, where n is the number of nodes and p is the largest time length. The result easily carries over to cycles. %B Research Report / Max-Planck-Institut für Informatik
[62]
P. Krysta, A. Czumaj, and B. Vöcking, “Selfish traffic allocation for server farms,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-011, 2003.
Abstract
We study the price of selfish routing in non-cooperative networks like the Internet. In particular, we investigate the price of selfish routing using the coordination ratio and other (e.g., bicriteria) measures in the recently introduced game theoretic network model of Koutsoupias and Papadimitriou. We generalize this model towards general, monotone families of cost functions and cost functions from queueing theory. A summary of our main results for general, monotone cost functions is as follows.
Export
BibTeX
@techreport{, TITLE = {Selfish traffic allocation for server farms}, AUTHOR = {Krysta, Piotr and Czumaj, Artur and V{\"o}cking, Berthold}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-011}, NUMBER = {MPI-I-2003-1-011}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We study the price of selfish routing in non-cooperative networks like the Internet. In particular, we investigate the price of selfish routing using the coordination ratio and other (e.g., bicriteria) measures in the recently introduced game theoretic network model of Koutsoupias and Papadimitriou. We generalize this model towards general, monotone families of cost functions and cost functions from queueing theory. A summary of our main results for general, monotone cost functions is as follows.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Krysta, Piotr %A Czumaj, Artur %A Vöcking, Berthold %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Selfish traffic allocation for server farms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B33-E %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-011 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 43 p. %X We study the price of selfish routing in non-cooperative networks like the Internet. In particular, we investigate the price of selfish routing using the coordination ratio and other (e.g., bicriteria) measures in the recently introduced game theoretic network model of Koutsoupias and Papadimitriou. We generalize this model towards general, monotone families of cost functions and cost functions from queueing theory. A summary of our main results for general, monotone cost functions is as follows. %B Research Report / Max-Planck-Institut für Informatik
[63]
P. Krysta, P. Sanders, and B. Vöcking, “Scheduling and traffic allocation for tasks with bounded splittability,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-002, 2003.
Abstract
We investigate variants of the well studied problem of scheduling tasks on uniformly related machines to minimize the makespan. In the $k$-splittable scheduling problem each task can be broken into at most $k \ge 2$ pieces each of which has to be assigned to a different machine. In the slightly more general SAC problem each task $j$ comes with its own splittability parameter $k_j$, where we assume $k_j \ge 2$. These problems are known to be NP-hard and, hence, previous research mainly focuses on approximation algorithms. Our motivation to study these scheduling problems is traffic allocation for server farms based on a variant of the Internet Domain Name Service (DNS) that uses a stochastic splitting of request streams. Optimal solutions for the $k$-splittable scheduling problem yield optimal solutions for this traffic allocation problem. Approximation ratios, however, do not translate from one problem to the other because of non-linear latency functions. In fact, we can prove that the traffic allocation problem with standard latency functions from Queueing Theory cannot be approximated in polynomial time within any finite factor because of the extreme behavior of these functions. Because of the inapproximability, we turn our attention to fixed-parameter tractable algorithms. Our main result is a polynomial time algorithm computing an exact solution for the $k$-splittable scheduling problem as well as the SAC problem for any fixed number of machines. The running time of our algorithm increases exponentially with the number of machines but is only linear in the number of tasks. This result is the first proof that bounded splittability reduces the complexity of scheduling as the unsplittable scheduling is known to be NP-hard already for two machines. Furthermore, since our algorithm solves the scheduling problem exactly, it also solves the traffic allocation problem that motivated our study.
Export
BibTeX
@techreport{MPI-I-2003-1-002, TITLE = {Scheduling and traffic allocation for tasks with bounded splittability}, AUTHOR = {Krysta, Piotr and Sanders, Peter and V{\"o}cking, Berthold}, LANGUAGE = {eng}, NUMBER = {MPI-I-2003-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We investigate variants of the well studied problem of scheduling tasks on uniformly related machines to minimize the makespan. In the $k$-splittable scheduling problem each task can be broken into at most $k \ge 2$ pieces each of which has to be assigned to a different machine. In the slightly more general SAC problem each task $j$ comes with its own splittability parameter $k_j$, where we assume $k_j \ge 2$. These problems are known to be NP-hard and, hence, previous research mainly focuses on approximation algorithms. Our motivation to study these scheduling problems is traffic allocation for server farms based on a variant of the Internet Domain Name Service (DNS) that uses a stochastic splitting of request streams. Optimal solutions for the $k$-splittable scheduling problem yield optimal solutions for this traffic allocation problem. Approximation ratios, however, do not translate from one problem to the other because of non-linear latency functions. In fact, we can prove that the traffic allocation problem with standard latency functions from Queueing Theory cannot be approximated in polynomial time within any finite factor because of the extreme behavior of these functions. Because of the inapproximability, we turn our attention to fixed-parameter tractable algorithms. Our main result is a polynomial time algorithm computing an exact solution for the $k$-splittable scheduling problem as well as the SAC problem for any fixed number of machines. The running time of our algorithm increases exponentially with the number of machines but is only linear in the number of tasks. This result is the first proof that bounded splittability reduces the complexity of scheduling as the unsplittable scheduling is known to be NP-hard already for two machines. Furthermore, since our algorithm solves the scheduling problem exactly, it also solves the traffic allocation problem that motivated our study.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Krysta, Piotr %A Sanders, Peter %A Vöcking, Berthold %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Scheduling and traffic allocation for tasks with bounded splittability : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6BD1-8 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 15 p. %X We investigate variants of the well studied problem of scheduling tasks on uniformly related machines to minimize the makespan. In the $k$-splittable scheduling problem each task can be broken into at most $k \ge 2$ pieces each of which has to be assigned to a different machine. In the slightly more general SAC problem each task $j$ comes with its own splittability parameter $k_j$, where we assume $k_j \ge 2$. These problems are known to be NP-hard and, hence, previous research mainly focuses on approximation algorithms. Our motivation to study these scheduling problems is traffic allocation for server farms based on a variant of the Internet Domain Name Service (DNS) that uses a stochastic splitting of request streams. Optimal solutions for the $k$-splittable scheduling problem yield optimal solutions for this traffic allocation problem. Approximation ratios, however, do not translate from one problem to the other because of non-linear latency functions. In fact, we can prove that the traffic allocation problem with standard latency functions from Queueing Theory cannot be approximated in polynomial time within any finite factor because of the extreme behavior of these functions. Because of the inapproximability, we turn our attention to fixed-parameter tractable algorithms. Our main result is a polynomial time algorithm computing an exact solution for the $k$-splittable scheduling problem as well as the SAC problem for any fixed number of machines. The running time of our algorithm increases exponentially with the number of machines but is only linear in the number of tasks. This result is the first proof that bounded splittability reduces the complexity of scheduling as the unsplittable scheduling is known to be NP-hard already for two machines. Furthermore, since our algorithm solves the scheduling problem exactly, it also solves the traffic allocation problem that motivated our study. %B Research Report / Max-Planck-Institut für Informatik
[64]
P. Sanders and R. Dementiev, “Asynchronous parallel disk sorting,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-001, 2003.
Abstract
We develop an algorithm for parallel disk sorting, whose I/O cost approaches the lower bound and that guarantees almost perfect overlap between I/O and computation. Previous algorithms have either suboptimal I/O volume or cannot guarantee that I/O and computations can always be overlapped. We give an efficient implementation that can (at least) compete with the best practical implementations but gives additional performance guarantees. For the experiments we have configured a state-of-the-art machine that can sustain full bandwidth I/O with eight disks and is very cost effective.
Export
BibTeX
@techreport{MPI-I-2003-1-001, TITLE = {Asynchronous parallel disk sorting}, AUTHOR = {Sanders, Peter and Dementiev, Roman}, LANGUAGE = {eng}, NUMBER = {MPI-I-2003-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We develop an algorithm for parallel disk sorting, whose I/O cost approaches the lower bound and that guarantees almost perfect overlap between I/O and computation. Previous algorithms have either suboptimal I/O volume or cannot guarantee that I/O and computations can always be overlapped. We give an efficient implementation that can (at least) compete with the best practical implementations but gives additional performance guarantees. For the experiments we have configured a state-of-the-art machine that can sustain full bandwidth I/O with eight disks and is very cost effective.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Sanders, Peter %A Dementiev, Roman %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Asynchronous parallel disk sorting : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6C80-5 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 22 p. %X We develop an algorithm for parallel disk sorting, whose I/O cost approaches the lower bound and that guarantees almost perfect overlap between I/O and computation. Previous algorithms have either suboptimal I/O volume or cannot guarantee that I/O and computations can always be overlapped. We give an efficient implementation that can (at least) compete with the best practical implementations but gives additional performance guarantees. For the experiments we have configured a state-of-the-art machine that can sustain full bandwidth I/O with eight disks and is very cost effective. %B Research Report / Max-Planck-Institut für Informatik
[65]
P. Sanders, “Polynomial time algorithms for network information flow,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-008, 2003.
Abstract
The famous max-flow min-cut theorem states that a source node $s$ can send information through a network $(V,E)$ to a sink node $t$ at a rate determined by the min-cut separating $s$ and $t$. Recently it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to reencode the information they receive. We give polynomial time algorithms for solving this problem. We additionally underline the potential benefit of coding by showing that multicasting without coding sometimes only allows a rate that is a factor $\Omega(\log |V|)$ smaller.
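For a single sink, the rate in question is the classical max-flow value. The Edmonds-Karp sketch below (our code, illustrating the bound the report builds on rather than its multicast coding algorithms) computes it:

    from collections import deque

    def max_flow(capacity, s, t):
        # Edmonds-Karp on a residual-capacity map. By max-flow min-cut, the
        # value returned equals the capacity of a minimum s-t cut, i.e. the
        # single-sink information rate the abstract refers to.
        nodes = set(capacity) | {v for u in capacity for v in capacity[u]}
        res = {u: {v: 0 for v in nodes} for u in nodes}
        for u in capacity:
            for v, c in capacity[u].items():
                res[u][v] += c
        total = 0
        while True:
            pred = {s: None}            # BFS for a shortest augmenting path
            queue = deque([s])
            while queue and t not in pred:
                u = queue.popleft()
                for v in nodes:
                    if v not in pred and res[u][v] > 0:
                        pred[v] = u
                        queue.append(v)
            if t not in pred:
                return total            # no augmenting path: flow is maximum
            path, v = [], t
            while pred[v] is not None:
                path.append((pred[v], v))
                v = pred[v]
            delta = min(res[u][v] for u, v in path)
            for u, v in path:           # augment along the path
                res[u][v] -= delta
                res[v][u] += delta
            total += delta

    # Two node-disjoint unit-capacity s-t paths: the min cut, hence the rate, is 2.
    print(max_flow({'s': {'a': 1, 'b': 1}, 'a': {'t': 1}, 'b': {'t': 1}}, 's', 't'))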
Export
BibTeX
@techreport{, TITLE = {Polynomial time algorithms for network information flow}, AUTHOR = {Sanders, Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-008}, NUMBER = {MPI-I-2003-1-008}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {The famous max-flow min-cut theorem states that a source node $s$ can send information through a network $(V,E)$ to a sink node $t$ at a rate determined by the min-cut separating $s$ and $t$. Recently it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to reencode the information they receive. We give polynomial time algorithms for solving this problem. We additionally underline the potential benefit of coding by showing that multicasting without coding sometimes only allows a rate that is a factor $\Omega(\log |V|)$ smaller.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Sanders, Peter %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Polynomial time algorithms for network information flow : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B4A-B %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-008 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 18 p. %X The famous max-flow min-cut theorem states that a source node $s$ can send information through a network $(V,E)$ to a sink node $t$ at a rate determined by the min-cut separating $s$ and $t$. Recently it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to reencode the information they receive. We give polynomial time algorithms for solving this problem. We additionally underline the potential benefit of coding by showing that multicasting without coding sometimes only allows a rate that is a factor $\Omega(\log |V|)$ smaller. %B Research Report / Max-Planck-Institut für Informatik
[66]
G. Schäfer and S. Leonardi, “Cross-monotonic cost sharing methods for connected facility location games,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-017, 2003.
Abstract
We present cost sharing methods for connected facility location games that are cross-monotonic, competitive, and recover a constant fraction of the cost of the constructed solution. The novelty of this paper is that we use randomized algorithms, and that we share the expected cost among the participating users. As a consequence, our cost sharing methods are simple, and achieve attractive approximation ratios for various NP-hard problems. We also provide a primal-dual cost sharing method for the connected facility location game with opening costs.
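For concreteness, cross-monotonicity can be checked directly from its definition: a user's cost share may only decrease as more users join. The brute-force checker below assumes a black-box cost sharing function `share(i, S)` and is not part of the paper's methods.

```python
# A minimal sketch of the cross-monotonicity property itself, not of the
# paper's cost sharing methods; share(i, S) is an assumed black box.
from itertools import combinations

def is_cross_monotonic(share, users, eps=1e-9):
    subsets = [frozenset(c) for k in range(1, len(users) + 1)
               for c in combinations(users, k)]
    # for every S subset of T and every user i in S: share(i, T) <= share(i, S)
    return all(share(i, T) <= share(i, S) + eps
               for S in subsets for T in subsets if S <= T
               for i in S)
```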
Export
BibTeX
@techreport{MPI-I-2003-1-017, TITLE = {Cross-monotonic cost sharing methods for connected facility location games}, AUTHOR = {Sch{\"a}fer, Guido and Leonardi, Stefano}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-017}, NUMBER = {MPI-I-2003-1-017}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We present cost sharing methods for connected facility location games that are cross-monotonic, competitive, and recover a constant fraction of the cost of the constructed solution. The novelty of this paper is that we use randomized algorithms, and that we share the expected cost among the participating users. As a consequence, our cost sharing methods are simple, and achieve attractive approximation ratios for various NP-hard problems. We also provide a primal-dual cost sharing method for the connected facility location game with opening costs.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Schäfer, Guido %A Leonardi, Stefano %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Cross-monotonic cost sharing methods for connected facility location games : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B12-7 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-017 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 10 p. %X We present cost sharing methods for connected facility location games that are cross-monotonic, competitive, and recover a constant fraction of the cost of the constructed solution. The novelty of this paper is that we use randomized algorithms, and that we share the expected cost among the participating users. As a consequence, our cost sharing methods are simple, and achieve attractive approximation ratios for various NP-hard problems. We also provide a primal-dual cost sharing method for the connected facility location game with opening costs. %B Research Report / Max-Planck-Institut für Informatik
[67]
G. Schäfer and N. Sivadasan, “Topology matters: smoothed competitive analysis of metrical task systems,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-016, 2003.
Abstract
We consider online problems that can be modeled as \emph{metrical task systems}: An online algorithm resides in a graph $G$ of $n$ nodes and may move in this graph at a cost equal to the distance. The algorithm has to service a sequence of \emph{tasks} that arrive online; each task specifies for each node a \emph{request cost} that is incurred if the algorithm services the task in this particular node. The objective is to minimize the total request cost plus the total travel cost. Several important online problems can be modeled as metrical task systems. Borodin, Linial and Saks \cite{BLS92} presented a deterministic \emph{work function algorithm} (WFA) for metrical task systems having a tight competitive ratio of $2n-1$. However, the competitive ratio often is an over-pessimistic estimation of the true performance of an online algorithm. In this paper, we present a \emph{smoothed competitive analysis} of WFA. Given an adversarial task sequence, we smoothen the request costs by means of a symmetric additive smoothing model and analyze the competitive ratio of WFA on the smoothed task sequence. Our analysis reveals that the smoothed competitive ratio of WFA is much better than $O(n)$ and that it depends on several topological parameters of the underlying graph $G$, such as the minimum edge length $U_{\min}$, the maximum degree $D$, and the edge diameter $diam$. Assuming that the ratio between the maximum and the minimum edge length of $G$ is bounded by a constant, the smoothed competitive ratio of WFA becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and $O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where $\sigma$ denotes the standard deviation of the smoothing distribution. For example, already for perturbations with $\sigma = \Theta(U_{\min})$ the competitive ratio reduces to $O(\log n)$ on a clique and to $O(\sqrt{n})$ on a line. We also prove that for a large class of graphs these bounds are asymptotically tight. Furthermore, we provide two lower bounds for any arbitrary graph. We obtain a better bound of $O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on the smoothed competitive ratio of WFA if each adversarial task contains at most $\beta$ non-zero entries. Our analysis holds for various probability distributions, including the uniform and the normal distribution. We also provide the first average case analysis of WFA. We prove that WFA has $O(\log(D))$ expected competitive ratio if the request costs are chosen randomly from an arbitrary non-increasing distribution with standard deviation.
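The work function algorithm at the center of this analysis admits a compact statement; the sketch below follows the standard textbook formulation, and the dict-of-dicts metric encoding and per-task request-cost dicts are assumptions made for illustration.

```python
# A minimal sketch of the work function algorithm (WFA) for metrical task
# systems; dist must be a full metric with dist[s][s] == 0.
def wfa_cost(dist, nodes, tasks, start):
    w = {s: dist[start][s] for s in nodes}          # work function w_0
    cur, total = start, 0
    for r in tasks:                                 # r[s] = request cost at s
        # w_t(s) = min over s' of ( w_{t-1}(s') + r(s') + d(s', s) )
        w = {s: min(w[s2] + r[s2] + dist[s2][s] for s2 in nodes) for s in nodes}
        # WFA serves the task in the state minimizing w_t(s) + d(cur, s)
        nxt = min(nodes, key=lambda s: w[s] + dist[cur][s])
        total += dist[cur][nxt] + r[nxt]            # travel cost + request cost
        cur = nxt
    return total
```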
Export
BibTeX
@techreport{MPI-I-2003-1-016, TITLE = {Topology matters: smoothed competitive analysis of metrical task systems}, AUTHOR = {Sch{\"a}fer, Guido and Sivadasan, Naveen}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-016}, NUMBER = {MPI-I-2003-1-016}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {We consider online problems that can be modeled as \emph{metrical task systems}: An online algorithm resides in a graph $G$ of $n$ nodes and may move in this graph at a cost equal to the distance. The algorithm has to service a sequence of \emph{tasks} that arrive online; each task specifies for each node a \emph{request cost} that is incurred if the algorithm services the task in this particular node. The objective is to minimize the total request cost plus the total travel cost. Several important online problems can be modeled as metrical task systems. Borodin, Linial and Saks \cite{BLS92} presented a deterministic \emph{work function algorithm} (WFA) for metrical task systems having a tight competitive ratio of $2n-1$. However, the competitive ratio often is an over-pessimistic estimation of the true performance of an online algorithm. In this paper, we present a \emph{smoothed competitive analysis} of WFA. Given an adversarial task sequence, we smoothen the request costs by means of a symmetric additive smoothing model and analyze the competitive ratio of WFA on the smoothed task sequence. Our analysis reveals that the smoothed competitive ratio of WFA is much better than $O(n)$ and that it depends on several topological parameters of the underlying graph $G$, such as the minimum edge length $U_{\min}$, the maximum degree $D$, and the edge diameter $diam$. Assuming that the ratio between the maximum and the minimum edge length of $G$ is bounded by a constant, the smoothed competitive ratio of WFA becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and $O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where $\sigma$ denotes the standard deviation of the smoothing distribution. For example, already for perturbations with $\sigma = \Theta(U_{\min})$ the competitive ratio reduces to $O(\log n)$ on a clique and to $O(\sqrt{n})$ on a line. We also prove that for a large class of graphs these bounds are asymptotically tight. Furthermore, we provide two lower bounds for any arbitrary graph. We obtain a better bound of $O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on the smoothed competitive ratio of WFA if each adversarial task contains at most $\beta$ non-zero entries. Our analysis holds for various probability distributions, including the uniform and the normal distribution. We also provide the first average case analysis of WFA. We prove that WFA has $O(\log(D))$ expected competitive ratio if the request costs are chosen randomly from an arbitrary non-increasing distribution with standard deviation.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Schäfer, Guido %A Sivadasan, Naveen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Topology matters: smoothed competitive analysis of metrical task systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B15-1 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-016 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 28 p. %X We consider online problems that can be modeled as \emph{metrical task systems}: An online algorithm resides in a graph $G$ of $n$ nodes and may move in this graph at a cost equal to the distance. The algorithm has to service a sequence of \emph{tasks} that arrive online; each task specifies for each node a \emph{request cost} that is incurred if the algorithm services the task in this particular node. The objective is to minimize the total request cost plus the total travel cost. Several important online problems can be modeled as metrical task systems. Borodin, Linial and Saks \cite{BLS92} presented a deterministic \emph{work function algorithm} (WFA) for metrical task systems having a tight competitive ratio of $2n-1$. However, the competitive ratio often is an over-pessimistic estimation of the true performance of an online algorithm. In this paper, we present a \emph{smoothed competitive analysis} of WFA. Given an adversarial task sequence, we smoothen the request costs by means of a symmetric additive smoothing model and analyze the competitive ratio of WFA on the smoothed task sequence. Our analysis reveals that the smoothed competitive ratio of WFA is much better than $O(n)$ and that it depends on several topological parameters of the underlying graph $G$, such as the minimum edge length $U_{\min}$, the maximum degree $D$, and the edge diameter $diam$. Assuming that the ratio between the maximum and the minimum edge length of $G$ is bounded by a constant, the smoothed competitive ratio of WFA becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and $O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where $\sigma$ denotes the standard deviation of the smoothing distribution. For example, already for perturbations with $\sigma = \Theta(U_{\min})$ the competitive ratio reduces to $O(\log n)$ on a clique and to $O(\sqrt{n})$ on a line. We also prove that for a large class of graphs these bounds are asymptotically tight. Furthermore, we provide two lower bounds for any arbitrary graph. We obtain a better bound of $O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on the smoothed competitive ratio of WFA if each adversarial task contains at most $\beta$ non-zero entries. Our analysis holds for various probability distributions, including the uniform and the normal distribution. We also provide the first average case analysis of WFA. We prove that WFA has $O(\log(D))$ expected competitive ratio if the request costs are chosen randomly from an arbitrary non-increasing distribution with standard deviation. %B Research Report / Max-Planck-Institut für Informatik
[68]
G. Schäfer, “A note on the smoothed complexity of the single-source shortest path problem,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-018, 2003.
Abstract
Banderier, Beier and Mehlhorn showed that the single-source shortest path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are $K$-bit integers and the last $k$ least significant bits are perturbed randomly. Their analysis holds if each bit is set to $0$ or $1$ with probability $\frac{1}{2}$. We extend their result and show that the same analysis goes through for a large class of probability distributions: We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of each edge cost are replaced by some random number chosen from $[0, \dots, 2^k-1]$ according to some \emph{arbitrary} probability distribution whose expectation is not too close to zero. We do not require that the edge costs are perturbed independently. The same time bound holds even if the random perturbations are heterogeneous. If $k=K$ our analysis implies a linear average case running time for various probability distributions. We also show that the running time is $O(m+n(K-k))$ with high probability if the random replacements are chosen independently.
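The perturbation model is easy to state in code. In the sketch below the uniform sampler is a placeholder of our choosing, since the result allows an arbitrary distribution whose expectation is not too close to zero.

```python
# A minimal sketch of partial bit randomization: the k least significant
# bits of a K-bit edge cost are replaced by a random value in [0, 2^k - 1].
import random

def perturb(cost, k):
    # clear the low k bits, then fill them with a random replacement
    return (cost >> k << k) | random.randrange(2 ** k)
```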
Export
BibTeX
@techreport{MPI-I-2003-1-018, TITLE = {A note on the smoothed complexity of the single-source shortest path problem}, AUTHOR = {Sch{\"a}fer, Guido}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-018}, NUMBER = {MPI-I-2003-1-018}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Banderier, Beier and Mehlhorn showed that the single-source shortest path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are $K$-bit integers and the last $k$ least significant bits are perturbed randomly. Their analysis holds if each bit is set to $0$ or $1$ with probability $\frac{1}{2}$. We extend their result and show that the same analysis goes through for a large class of probability distributions: We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of each edge cost are replaced by some random number chosen from $[0, \dots, 2^k-1]$ according to some \emph{arbitrary} probability distribution whose expectation is not too close to zero. We do not require that the edge costs are perturbed independently. The same time bound holds even if the random perturbations are heterogeneous. If $k=K$ our analysis implies a linear average case running time for various probability distributions. We also show that the running time is $O(m+n(K-k))$ with high probability if the random replacements are chosen independently.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Schäfer, Guido %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A note on the smoothed complexity of the single-source shortest path problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B0D-6 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-018 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 8 p. %X Banderier, Beier and Mehlhorn showed that the single-source shortest path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are $K$-bit integers and the last $k$ least significant bits are perturbed randomly. Their analysis holds if each bit is set to $0$ or $1$ with probability $\frac{1}{2}$. We extend their result and show that the same analysis goes through for a large class of probability distributions: We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of each edge cost are replaced by some random number chosen from $[0, \dots, 2^k-1]$ according to some \emph{arbitrary} probability distribution whose expectation is not too close to zero. We do not require that the edge costs are perturbed independently. The same time bound holds even if the random perturbations are heterogeneous. If $k=K$ our analysis implies a linear average case running time for various probability distributions. We also show that the running time is $O(m+n(K-k))$ with high probability if the random replacements are chosen independently. %B Research Report / Max-Planck-Institut für Informatik
[69]
G. Schäfer, L. Becchetti, S. Leonardi, A. Marchetti-Spaccamela, and T. Vredeveld, “Average case and smoothed competitive analysis of the multi-level feedback algorithm,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-014, 2003.
Abstract
In this paper we introduce the notion of smoothed competitive analysis of online algorithms. Smoothed analysis has been proposed by Spielman and Teng [\emph{Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour of algorithms that work well in practice while performing very poorly from a worst case analysis point of view. We apply this notion to analyze the Multi-Level Feedback (MLF) algorithm to minimize the total flow time on a sequence of jobs released over time when the processing time of a job is only known at time of completion. The initial processing times are integers in the range $[1,2^K]$. We use a partial bit randomization model, where the initial processing times are smoothened by changing the $k$ least significant bits under a quite general class of probability distributions. We show that MLF admits a smoothed competitive ratio of $O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes the standard deviation of the distribution. In particular, we obtain a competitive ratio of $O(2^{K-k})$ if $\sigma = \Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic algorithm that is run on processing times smoothened according to the partial bit randomization model. For various other smoothening models, including the additive symmetric smoothening model used by Spielman and Teng, we give a higher lower bound of $\Omega(2^K)$. A direct consequence of our result is also the first average case analysis of MLF. We show a constant expected ratio of the total flow time of MLF to the optimum under several distributions including the uniform distribution.
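The MLF rule itself is short; the sketch below follows the textbook formulation (a job in queue $i$ gets a slice of $2^i$ and is demoted if unfinished) and, as a simplification not in the report, releases all jobs at time 0.

```python
# A minimal sketch of Multi-Level Feedback: serve the lowest nonempty
# queue, give a job in queue i a time slice of 2^i, and demote it one
# queue if it does not finish. All release times are 0 (an assumption).
from collections import deque

def mlf_total_flow_time(processing_times):
    queues = [deque(enumerate(processing_times))]   # (job id, remaining time)
    t, finish = 0, {}
    while any(queues):
        i = next(level for level, q in enumerate(queues) if q)
        jid, rem = queues[i].popleft()
        run = min(2 ** i, rem)                       # slice of queue i is 2^i
        t += run
        if rem > run:                                # unfinished: demote
            if i + 1 == len(queues):
                queues.append(deque())
            queues[i + 1].append((jid, rem - run))
        else:
            finish[jid] = t
    return sum(finish.values())                      # flow time, releases at 0
```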
Export
BibTeX
@techreport{MPI-I-2003-1-014, TITLE = {Average case and smoothed competitive analysis of the multi-level feedback algorithm}, AUTHOR = {Sch{\"a}fer, Guido and Becchetti, Luca and Leonardi, Stefano and Marchetti-Spaccamela, Alberto and Vredeveld, Tjark}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-014}, NUMBER = {MPI-I-2003-1-014}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {In this paper we introduce the notion of smoothed competitive analysis of online algorithms. Smoothed analysis has been proposed by Spielman and Teng [\emph{Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour of algorithms that work well in practice while performing very poorly from a worst case analysis point of view. We apply this notion to analyze the Multi-Level Feedback (MLF) algorithm to minimize the total flow time on a sequence of jobs released over time when the processing time of a job is only known at time of completion. The initial processing times are integers in the range $[1,2^K]$. We use a partial bit randomization model, where the initial processing times are smoothened by changing the $k$ least significant bits under a quite general class of probability distributions. We show that MLF admits a smoothed competitive ratio of $O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes the standard deviation of the distribution. In particular, we obtain a competitive ratio of $O(2^{K-k})$ if $\sigma = \Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic algorithm that is run on processing times smoothened according to the partial bit randomization model. For various other smoothening models, including the additive symmetric smoothening model used by Spielman and Teng, we give a higher lower bound of $\Omega(2^K)$. A direct consequence of our result is also the first average case analysis of MLF. We show a constant expected ratio of the total flow time of MLF to the optimum under several distributions including the uniform distribution.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Schäfer, Guido %A Becchetti, Luca %A Leonardi, Stefano %A Marchetti-Spaccamela, Alberto %A Vredeveld, Tjark %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Average case and smoothed competitive analysis of the multi-level feedback algorithm : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B1C-4 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-014 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 31 p. %X In this paper we introduce the notion of smoothed competitive analysis of online algorithms. Smoothed analysis has been proposed by Spielman and Teng [\emph{Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour of algorithms that work well in practice while performing very poorly from a worst case analysis point of view. We apply this notion to analyze the Multi-Level Feedback (MLF) algorithm to minimize the total flow time on a sequence of jobs released over time when the processing time of a job is only known at time of completion. The initial processing times are integers in the range $[1,2^K]$. We use a partial bit randomization model, where the initial processing times are smoothened by changing the $k$ least significant bits under a quite general class of probability distributions. We show that MLF admits a smoothed competitive ratio of $O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes the standard deviation of the distribution. In particular, we obtain a competitive ratio of $O(2^{K-k})$ if $\sigma = \Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic algorithm that is run on processing times smoothened according to the partial bit randomization model. For various other smoothening models, including the additive symmetric smoothening model used by Spielman and Teng, we give a higher lower bound of $\Omega(2^K)$. A direct consequence of our result is also the first average case analysis of MLF. We show a constant expected ratio of the total flow time of MLF to the optimum under several distributions including the uniform distribution. %B Research Report / Max-Planck-Institut für Informatik
[70]
S. Schmitt, “The Diamond Operator for Real Algebraic Numbers,” Effective Computational Geometry for Curves and Surfaces, Sophia Antipolis, FRANCE, ECG-TR-243107-01, 2003.
Abstract
Real algebraic numbers are real roots of polynomials with integral coefficients. They can be represented as expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, k-th root operations for integral k, or taking roots of polynomials whose coefficients are given by the value of subexpressions. This last operator is called the diamond operator. I explain the implementation of the diamond operator in a LEDA extension package.
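The expression representation can be sketched as a small interpreter. The following is an illustrative stand-in, not the LEDA extension package: it evaluates in floating point via numpy root finding instead of exact arithmetic, and it omits the k-th root operation for brevity.

```python
# A minimal sketch of the expression trees described above: leaves are
# integers, internal nodes are arithmetic operations, and the diamond node
# selects the j-th smallest real root of a polynomial whose coefficients
# are themselves subexpressions. Exactness is not attempted here.
import numpy as np

def evaluate(expr):
    op, *args = expr
    if op == "int":
        return float(args[0])
    if op in ("+", "-", "*", "/"):
        a, b = map(evaluate, args)
        return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]
    if op == "diamond":                      # args = (j, [coeffs, high to low])
        j, coeffs = args[0], [evaluate(c) for c in args[1]]
        real = sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)
        return real[j - 1]
    raise ValueError(op)

# sqrt(2) as the 2nd smallest real root of x^2 - 2:
assert abs(evaluate(("diamond", 2, [("int", 1), ("int", 0), ("int", -2)])) - 2 ** 0.5) < 1e-9
```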
Export
BibTeX
@techreport{s-doran-03, TITLE = {The Diamond Operator for Real Algebraic Numbers}, AUTHOR = {Schmitt, Susanne}, LANGUAGE = {eng}, NUMBER = {ECG-TR-243107-01}, INSTITUTION = {Effective Computational Geometry for Curves and Surfaces}, ADDRESS = {Sophia Antipolis, FRANCE}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Real algebraic numbers are real roots of polynomials with integral coefficients. They can be represented as expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, k-th root operations for integral k, or taking roots of polynomials whose coefficients are given by the value of subexpressions. This last operator is called the diamond operator. I explain the implementation of the diamond operator in a LEDA extension package.}, }
Endnote
%0 Report %A Schmitt, Susanne %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T The Diamond Operator for Real Algebraic Numbers : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-EBB1-B %Y Effective Computational Geometry for Curves and Surfaces %C Sophia Antipolis, FRANCE %D 2003 %X Real algebraic numbers are real roots of polynomials with integral coefficients. They can be represented as expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, k-th root operations for integral k, or taking roots of polynomials whose coefficients are given by the value of subexpressions. This last operator is called the diamond operator. I explain the implementation of the diamond operator in a LEDA extension package.
[71]
H. Tamaki, “A linear time heuristic for the branch-decomposition of planar graphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-010, 2003.
Abstract
Let $G$ be a biconnected planar graph given together with its planar drawing. A {\em face-vertex walk} in $G$ of length $k$ is an alternating sequence $x_0, \ldots, x_k$ of vertices and faces (i.e., if $x_{i - 1}$ is a face then $x_i$ is a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident with each other for $1 \leq i \leq k$. For each vertex or face $x$ of $G$, let $\alpha_x$ denote the length of the shortest face-vertex walk from the outer face of $G$ to $x$. Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$. We show that there always exists a branch-decomposition of $G$ with width $\alpha_G$ and that such a decomposition can be constructed in linear time. We also give experimental results, in which we compare the width of our decomposition with the optimal width and with the width obtained by some heuristics for general graphs proposed by previous researchers, on test instances used by those researchers. On 56 out of the total 59 test instances, our method gives a decomposition within additive 2 of the optimum width and on 33 instances it achieves the optimum width.
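Since a face-vertex walk alternates between incident vertices and faces, each $\alpha_x$ is simply a BFS distance in the bipartite vertex-face incidence graph started at the outer face. A minimal sketch, where the incidence encoding is an assumption:

```python
# A minimal sketch of computing alpha_x for all vertices and faces:
# incident maps each vertex to its incident faces and each face to its
# incident vertices; alpha_G is then the maximum BFS distance.
from collections import deque

def alpha(incident, outer_face):
    dist = {outer_face: 0}
    q = deque([outer_face])
    while q:
        x = q.popleft()
        for y in incident[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist          # the width bound alpha_G is max(dist.values())
```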
Export
BibTeX
@techreport{MPI-I-2003-1-010, TITLE = {A linear time heuristic for the branch-decomposition of planar graphs}, AUTHOR = {Tamaki, Hisao}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-010}, NUMBER = {MPI-I-2003-1-010}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {Let $G$ be a biconnected planar graph given together with its planar drawing. A {\em face-vertex walk} in $G$ of length $k$ is an alternating sequence $x_0, \ldots, x_k$ of vertices and faces (i.e., if $x_{i - 1}$ is a face then $x_i$ is a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident with each other for $1 \leq i \leq k$. For each vertex or face $x$ of $G$, let $\alpha_x$ denote the length of the shortest face-vertex walk from the outer face of $G$ to $x$. Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$. We show that there always exists a branch-decomposition of $G$ with width $\alpha_G$ and that such a decomposition can be constructed in linear time. We also give experimental results, in which we compare the width of our decomposition with the optimal width and with the width obtained by some heuristics for general graphs proposed by previous researchers, on test instances used by those researchers. On 56 out of the total 59 test instances, our method gives a decomposition within additive 2 of the optimum width and on 33 instances it achieves the optimum width.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Tamaki, Hisao %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A linear time heuristic for the branch-decomposition of planar graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B37-6 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-010 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 18 p. %X Let $G$ be a biconnected planar graph given together with its planar drawing. A {\em face-vertex walk} in $G$ of length $k$ is an alternating sequence $x_0, \ldots, x_k$ of vertices and faces (i.e., if $x_{i - 1}$ is a face then $x_i$ is a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident with each other for $1 \leq i \leq k$. For each vertex or face $x$ of $G$, let $\alpha_x$ denote the length of the shortest face-vertex walk from the outer face of $G$ to $x$. Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$. We show that there always exists a branch-decomposition of $G$ with width $\alpha_G$ and that such a decomposition can be constructed in linear time. We also give experimental results, in which we compare the width of our decomposition with the optimal width and with the width obtained by some heuristics for general graphs proposed by previous researchers, on test instances used by those researchers. On 56 out of the total 59 test instances, our method gives a decomposition within additive 2 of the optimum width and on 33 instances it achieves the optimum width. %B Research Report / Max-Planck-Institut für Informatik
[72]
H. Tamaki, “Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2003-1-007, 2003.
Abstract
A strategy of merging several traveling salesman tours into a better tour, called ACC (Alternating Cycles Contribution), is introduced. Two algorithms embodying this strategy for geometric instances are implemented and used to enhance Helsgaun's implementation of his variant of the Lin-Kernighan heuristic. Experiments on the large instances in TSPLIB show that a significant improvement of performance is obtained. These algorithms were used in September 2002 to find a new best tour for the largest instance pla85900 in TSPLIB.
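The objects such a merging strategy works with can be sketched quickly: in the symmetric difference of two tours' edge sets every vertex has degree 0 or 2, so it decomposes into cycles whose edges alternate between the two tours, and these are the natural candidates for exchanging tour parts. The helper below only extracts those edges; the ACC strategy itself is more involved.

```python
# A minimal sketch of the alternating-cycle structure behind tour merging,
# not of the ACC algorithms: tours are cyclic vertex sequences, and the
# symmetric difference of their undirected edge sets is returned.
def symmetric_difference_edges(tour_a, tour_b):
    def edges(tour):
        return {frozenset((tour[i], tour[(i + 1) % len(tour)]))
                for i in range(len(tour))}
    return edges(tour_a) ^ edges(tour_b)
```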
Export
BibTeX
@techreport{MPI-I-2003-1-007, TITLE = {Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem}, AUTHOR = {Tamaki, Hisao}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-007}, NUMBER = {MPI-I-2003-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2003}, DATE = {2003}, ABSTRACT = {A strategy of merging several traveling salesman tours into a better tour, called ACC (Alternating Cycles Contribution), is introduced. Two algorithms embodying this strategy for geometric instances are implemented and used to enhance Helsgaun's implementation of his variant of the Lin-Kernighan heuristic. Experiments on the large instances in TSPLIB show that a significant improvement of performance is obtained. These algorithms were used in September 2002 to find a new best tour for the largest instance pla85900 in TSPLIB.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Tamaki, Hisao %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6B66-B %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-007 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2003 %P 22 p. %X A strategy of merging several traveling salesman tours into a better tour, called ACC (Alternating Cycles Contribution), is introduced. Two algorithms embodying this strategy for geometric instances are implemented and used to enhance Helsgaun's implementation of his variant of the Lin-Kernighan heuristic. Experiments on the large instances in TSPLIB show that a significant improvement of performance is obtained. These algorithms were used in September 2002 to find a new best tour for the largest instance pla85900 in TSPLIB. %B Research Report / Max-Planck-Institut für Informatik
2002
[73]
N. Beldiceanu, M. Carlsson, and S. Thiel, “Cost-filtering Algorithms for the two Sides of the Sum of Weights of Distinct Values Constraint,” Swedish Institute of Computer Science, Kista, SICS-T-2002:14-SE, 2002.
Abstract
This article introduces the sum of weights of distinct values constraint, which can be seen as a generalization of the number of distinct values as well as of the alldifferent, and the relaxed alldifferent constraints. This constraint holds if a cost variable is equal to the sum of the weights associated to the distinct values taken by a given set of variables. For the first aspect, which is related to domination, we present four filtering algorithms. Two of them lead to perfect pruning when each domain variable consists of one set of consecutive values, while the two others take advantage of holes in the domains. For the second aspect, which is connected to maximum matching in a bipartite graph, we provide a complete filtering algorithm for the general case. Finally we introduce several generic deduction rules, which link both aspects of the constraint. These rules can be applied to other optimization constraints such as the minimum weight alldifferent constraint or the global cardinality constraint with costs. They also allow taking into account external constraints for getting enhanced bounds for the cost variable. In practice, the sum of weights of distinct values constraint occurs in assignment problems where using a resource once or several times costs the same. It also captures domination problems where one has to select a set of vertices in order to control every vertex of a graph.
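The constraint itself is simple to state; a minimal checker follows. The report's actual subject is the filtering algorithms, which prune variable domains rather than test complete assignments, so this is only the semantics.

```python
# A minimal sketch of the constraint's meaning: the cost variable must
# equal the sum of the weights of the distinct values taken.
def sum_of_weights_of_distinct_values(values, weight, cost):
    return cost == sum(weight[v] for v in set(values))

# e.g. values [2, 2, 5] with weights {2: 3, 5: 4} satisfy the constraint for cost 7
assert sum_of_weights_of_distinct_values([2, 2, 5], {2: 3, 5: 4}, 7)
```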
Export
BibTeX
@techreport{BCT2002:SumOfWeights, TITLE = {Cost-filtering Algorithms for the two Sides of the Sum of Weights of Distinct Values Constraint}, AUTHOR = {Beldiceanu, Nicolas and Carlsson, Mats and Thiel, Sven}, LANGUAGE = {eng}, ISSN = {1100-3154}, NUMBER = {SICS-T-2002:14-SE}, INSTITUTION = {Swedish Institute of Computer Science}, ADDRESS = {Kista}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {This article introduces the sum of weights of distinct values constraint, which can be seen as a generalization of the number of distinct values as well as of the alldifferent, and the relaxed alldifferent constraints. This constraint holds if a cost variable is equal to the sum of the weights associated to the distinct values taken by a given set of variables. For the first aspect, which is related to domination, we present four filtering algorithms. Two of them lead to perfect pruning when each domain variable consists of one set of consecutive values, while the two others take advantage of holes in the domains. For the second aspect, which is connected to maximum matching in a bipartite graph, we provide a complete filtering algorithm for the general case. Finally we introduce several generic deduction rules, which link both aspects of the constraint. These rules can be applied to other optimization constraints such as the minimum weight alldifferent constraint or the global cardinality constraint with costs. They also allow taking into account external constraints for getting enhanced bounds for the cost variable. In practice, the sum of weights of distinct values constraint occurs in assignment problems where using a resource once or several times costs the same. It also captures domination problems where one has to select a set of vertices in order to control every vertex of a graph.}, TYPE = {SICS Technical Report}, }
Endnote
%0 Report %A Beldiceanu, Nicolas %A Carlsson, Mats %A Thiel, Sven %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Cost-filtering Algorithms for the two Sides of the Sum of Weights of Distinct Values Constraint : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-EBAD-A %Y Swedish Institute of Computer Science %C Kista %D 2002 %X This article introduces the sum of weights of distinct values constraint, which can be seen as a generalization of the number of distinct values as well as of the alldifferent, and the relaxed alldifferent constraints. This constraint holds if a cost variable is equal to the sum of the weights associated to the distinct values taken by a given set of variables. For the first aspect, which is related to domination, we present four filtering algorithms. Two of them lead to perfect pruning when each domain variable consists of one set of consecutive values, while the two others take advantage of holes in the domains. For the second aspect, which is connected to maximum matching in a bipartite graph, we provide a complete filtering algorithm for the general case. Finally we introduce several generic deduction rules, which link both aspects of the constraint. These rules can be applied to other optimization constraints such as the minimum weight alldifferent constraint or the global cardinality constraint with costs. They also allow taking into account external constraints for getting enhanced bounds for the cost variable. In practice, the sum of weights of distinct values constraint occurs in assignment problems where using a resource once or several times costs the same. It also captures domination problems where one has to select a set of vertices in order to control every vertex of a graph. %B SICS Technical Report %@ false
[74]
A. Eigenwillig, E. Schömer, and N. Wolpert, “Sweeping Arrangements of Cubic Segments Exactly and Efficiently,” Effective Computational Geometry for Curves and Surfaces, Sophia Antipolis, ECG-TR-182202-01, 2002.
Abstract
A method is presented to compute the planar arrangement induced by segments of algebraic curves of degree three (or less), using an improved Bentley-Ottmann sweep-line algorithm. Our method is exact (it provides the mathematically correct result), complete (it handles all possible geometric degeneracies), and efficient (the implementation can handle hundreds of segments). The range of possible input segments comprises conic arcs and cubic splines as special cases of particular practical importance.
Export
BibTeX
@techreport{esw-sacsee-02, TITLE = {Sweeping Arrangements of Cubic Segments Exactly and Efficiently}, AUTHOR = {Eigenwillig, Arno and Sch{\"o}mer, Elmar and Wolpert, Nicola}, LANGUAGE = {eng}, NUMBER = {ECG-TR-182202-01}, INSTITUTION = {Effective Computational Geometry for Curves and Surfaces}, ADDRESS = {Sophia Antipolis}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {A method is presented to compute the planar arrangement induced by segments of algebraic curves of degree three (or less), using an improved Bentley-Ottmann sweep-line algorithm. Our method is exact (it provides the mathematically correct result), complete (it handles all possible geometric degeneracies), and efficient (the implementation can handle hundreds of segments). The range of possible input segments comprises conic arcs and cubic splines as special cases of particular practical importance.}, }
Endnote
%0 Report %A Eigenwillig, Arno %A Schömer, Elmar %A Wolpert, Nicola %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Sweeping Arrangements of Cubic Segments Exactly and Efficiently : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-EB42-7 %Y Effective Computational Geometry for Curves and Surfaces %C Sophia Antipolis %D 2002 %X A method is presented to compute the planar arrangement induced by segments of algebraic curves of degree three (or less), using an improved Bentley-Ottmann sweep-line algorithm. Our method is exact (it provides the mathematically correct result), complete (it handles all possible geometric degeneracies), and efficient (the implementation can handle hundreds of segments). The range of possible input segments comprises conic arcs and cubic splines as special cases of particular practical importance.
[75]
S. Hert, T. Polzin, L. Kettner, and G. Schäfer, “Exp Lab: a tool set for computational experiments,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2002-1-004, 2002.
Abstract
We describe a set of tools that support the running, documentation, and evaluation of computational experiments. The tool set is designed not only to make computational experimentation easier but also to support good scientific practice by making results reproducible and more easily comparable to others' results by automatically documenting the experimental environment. The tools can be used separately or in concert and support all manner of experiments (\textit{i.e.}, any executable can be an experiment). The tools capitalize on the rich functionality available in Python to provide extreme flexibility and ease of use, but one need know nothing of Python to use the tools.
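The following is a hypothetical sketch in the spirit of the tool set, not its actual API: it runs an arbitrary executable as an experiment and records the outcome together with a snapshot of the environment, which is the kind of automatic documentation the abstract describes.

```python
# A hypothetical sketch (function name and log format are assumptions, not
# the tool set's interface): run any executable and document the result
# alongside the environment it ran in.
import json, platform, subprocess, time

def run_experiment(cmd, logfile="experiment.json"):
    t0 = time.time()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    record = {
        "cmd": cmd,
        "returncode": proc.returncode,
        "stdout": proc.stdout,
        "runtime_sec": time.time() - t0,
        "environment": {                      # snapshot for reproducibility
            "machine": platform.machine(),
            "system": platform.platform(),
            "python": platform.python_version(),
        },
    }
    with open(logfile, "w") as f:
        json.dump(record, f, indent=2)
    return record
```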
Export
BibTeX
@techreport{MPI-I-2002-1-004, TITLE = {Exp Lab: a tool set for computational experiments}, AUTHOR = {Hert, Susan and Polzin, Tobias and Kettner, Lutz and Sch{\"a}fer, Guido}, LANGUAGE = {eng}, NUMBER = {MPI-I-2002-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {We describe a set of tools that support the running, documentation, and evaluation of computational experiments. The tool set is designed not only to make computational experimentation easier but also to support good scientific practice by making results reproducible and more easily comparable to others' results by automatically documenting the experimental environment. The tools can be used separately or in concert and support all manner of experiments (\textit{i.e.}, any executable can be an experiment). The tools capitalize on the rich functionality available in Python to provide extreme flexibility and ease of use, but one need know nothing of Python to use the tools.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Hert, Susan %A Polzin, Tobias %A Kettner, Lutz %A Schäfer, Guido %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Exp Lab: a tool set for computational experiments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6C95-8 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2002 %P 59 p. %X We describe a set of tools that support the running, documentation, and evaluation of computational experiments. The tool set is designed not only to make computational experimentation easier but also to support good scientific practice by making results reproducible and more easily comparable to others' results by automatically documenting the experimental environment. The tools can be used separately or in concert and support all manner of experiments (\textit{i.e.}, any executable can be an experiment). The tools capitalize on the rich functionality available in Python to provide extreme flexibility and ease of use, but one need know nothing of Python to use the tools. %B Research Report
[76]
M. Hoefer, “Performance of heuristic and approximation algorithms for the uncapacitated facility location problem,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2002-1-005, 2002.
Abstract
The uncapacitated facility location problem (UFLP) is a problem that has been studied intensively in operational research. Recently a variety of new deterministic and heuristic approximation algorithms have evolved. In this paper, we compare five new approaches to this problem - the JMS- and the MYZ-approximation algorithms, a version of local search, a Tabu Search algorithm as well as a version of the Volume algorithm with randomized rounding. We compare solution quality and running times on different standard benchmark instances. With these instances and additional material a web page was set up, where the material used in this study is accessible.
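Of the compared approaches, plain local search is the simplest to sketch. The instance encoding and the single open/close neighborhood below are assumptions made for illustration; JMS, MYZ, Tabu Search, and the Volume algorithm are not shown.

```python
# A minimal sketch of local search for UFLP: f[i] is the opening cost of
# facility i and c[i][j] the cost of serving client j from facility i;
# repeatedly toggle one facility open/closed while the total cost improves.
def uflp_local_search(f, c):
    clients = range(len(c[0]))
    def total(S):
        return (sum(f[i] for i in S)
                + sum(min(c[i][j] for i in S) for j in clients))
    S = {0}                                  # arbitrary nonempty start
    improved = True
    while improved:
        improved = False
        for i in range(len(f)):
            T = S ^ {i}                      # toggle facility i
            if T and total(T) < total(S):
                S, improved = T, True
    return S, total(S)
```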
Export
BibTeX
@techreport{MPI-I-2002-1-005, TITLE = {Performance of heuristic and approximation algorithms for the uncapacitated facility location problem}, AUTHOR = {Hoefer, Martin}, LANGUAGE = {eng}, NUMBER = {MPI-I-2002-1-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {The uncapacitated facility location problem (UFLP) is a problem that has been studied intensively in operational research. Recently a variety of new deterministic and heuristic approximation algorithms have evolved. In this paper, we compare five new approaches to this problem -- the JMS- and the MYZ-approximation algorithms, a version of local search, a Tabu Search algorithm as well as a version of the Volume algorithm with randomized rounding. We compare solution quality and running times on different standard benchmark instances. With these instances and additional material a web page was set up, where the material used in this study is accessible.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Hoefer, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Performance of heuristic and approximation algorithms for the uncapacitated facility location problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6C92-E %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2002 %P 27 p. %X The uncapacitated facility location problem (UFLP) is a problem that has been studied intensively in operational research. Recently a variety of new deterministic and heuristic approximation algorithms have evolved. In this paper, we compare five new approaches to this problem - the JMS- and the MYZ-approximation algorithms, a version of local search, a Tabu Search algorithm as well as a version of the Volume algorithm with randomized rounding. We compare solution quality and running times on different standard benchmark instances. With these instances and additional material a web page was set up, where the material used in this study is accessible. %B Research Report / Max-Planck-Institut für Informatik
[77]
I. Katriel, P. Sanders, and J. L. Träff, “A practical minimum spanning tree algorithm using the cycle property,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2002-1-003, 2002.
Abstract
We present a simple new algorithm for computing minimum spanning trees that is more than two times faster than the best previously known algorithms (for dense, ``difficult'' inputs). It is of conceptual interest that the algorithm uses the property that the heaviest edge in a cycle can be discarded. Previously this has only been exploited in asymptotically optimal algorithms that are considered to be impractical. An additional advantage is that the algorithm can greatly profit from pipelined memory access. Hence, an implementation on a vector machine is up to 13 times faster than previous algorithms. We outline additional refinements for MSTs of implicitly defined graphs and the use of the central data structure for querying the heaviest edge between two nodes in the MST. The latter result is also interesting for sparse graphs.
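The cycle property itself fits in a few lines: if edges are scanned in order of increasing weight, any edge that closes a cycle is the heaviest edge on that cycle and can be discarded, which is exactly what a Kruskal-style filter with union-find does. This illustrates the property only, not the report's algorithm.

```python
# A minimal sketch of the cycle property via Kruskal: every rejected edge
# is the heaviest edge on the cycle it would close, so discarding it is safe.
def mst_by_cycle_property(n, edges):
    parent = list(range(n))
    def find(x):                              # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):             # edges given as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:                          # no cycle closed: keep the edge
            parent[ru] = rv
            tree.append((w, u, v))
    return tree
```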
Export
BibTeX
@techreport{MPI-I-2002-1-003, TITLE = {A practical minimum spanning tree algorithm using the cycle property}, AUTHOR = {Katriel, Irit and Sanders, Peter and Tr{\"a}ff, Jesper Larsson}, LANGUAGE = {eng}, NUMBER = {MPI-I-2002-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {We present a simple new algorithm for computing minimum spanning trees that is more than two times faster than the best previously known algorithms (for dense, ``difficult'' inputs). It is of conceptual interest that the algorithm uses the property that the heaviest edge in a cycle can be discarded. Previously this has only been exploited in asymptotically optimal algorithms that are considered to be impractical. An additional advantage is that the algorithm can greatly profit from pipelined memory access. Hence, an implementation on a vector machine is up to 13 times faster than previous algorithms. We outline additional refinements for MSTs of implicitly defined graphs and the use of the central data structure for querying the heaviest edge between two nodes in the MST. The latter result is also interesting for sparse graphs.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Katriel, Irit %A Sanders, Peter %A Träff, Jesper Larsson %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A practical minimum spanning tree algorithm using the cycle property : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6C98-2 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2002 %P 21 p. %X We present a simple new algorithm for computing minimum spanning trees that is more than two times faster than the best previously known algorithms (for dense, ``difficult'' inputs). It is of conceptual interest that the algorithm uses the property that the heaviest edge in a cycle can be discarded. Previously this has only been exploited in asymptotically optimal algorithms that are considered to be impractical. An additional advantage is that the algorithm can greatly profit from pipelined memory access. Hence, an implementation on a vector machine is up to 13 times faster than previous algorithms. We outline additional refinements for MSTs of implicitly defined graphs and the use of the central data structure for querying the heaviest edge between two nodes in the MST. The latter result is also interesting for sparse graphs. %B Research Report / Max-Planck-Institut für Informatik
[78]
T. Polzin and S. Vahdati, “Using (sub)graphs of small width for solving the Steiner problem,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2002-1-001, 2002.
Abstract
For the Steiner tree problem in networks, we present a practical algorithm that uses the fixed-parameter tractability of the problem with respect to a certain width parameter closely related to pathwidth. The running time of the algorithm is linear in the number of vertices when the pathwidth is constant. Combining this algorithm with our previous techniques, we can already profit from small width in subgraphs of an instance. Integrating this algorithm into our program package for the Steiner problem accelerates the solution process on some groups of instances and leads to a fast solution of some previously unsolved benchmark instances.
Export
BibTeX
@techreport{MPI-I-2002-1-001, TITLE = {Using (sub)graphs of small width for solving the Steiner problem}, AUTHOR = {Polzin, Tobias and Vahdati, Siavash}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2002-1-001}, NUMBER = {MPI-I-2002-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {For the Steiner tree problem in networks, we present a practical algorithm that uses the fixed-parameter tractability of the problem with respect to a certain width parameter closely related to pathwidth. The running time of the algorithm is linear in the number of vertices when the pathwidth is constant. Combining this algorithm with our previous techniques, we can already profit from small width in subgraphs of an instance. Integrating this algorithm into our program package for the Steiner problem accelerates the solution process on some groups of instances and leads to a fast solution of some previously unsolved benchmark instances.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Polzin, Tobias %A Vahdati, Siavash %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Using (sub)graphs of small width for solving the Steiner problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6C9E-5 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2002-1-001 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2002 %P 9 p. %X For the Steiner tree problem in networks, we present a practical algorithm that uses the fixed-parameter tractability of the problem with respect to a certain width parameter closely related to pathwidth. The running time of the algorithm is linear in the number of vertices when the pathwidth is constant. Combining this algorithm with our previous techniques, we can already profit from small width in subgraphs of an instance. Integrating this algorithm into our program package for the Steiner problem accelerates the solution process on some groups of instances and leads to a fast solution of some previously unsolved benchmark instances. %B Research Report / Max-Planck-Institut für Informatik
[79]
P. Sanders and J. L. Träff, “The factor algorithm for all-to-all communication on clusters of SMP nodes,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2002-1-008, 2002.
Abstract
We present an algorithm for all-to-all personalized communication, in which every processor has an individual message to deliver to every other processor. The machine model we consider is a cluster of processing nodes where each node, possibly consisting of several processors, can participate in only one communication operation with another node at a time. The nodes may have different numbers of processors. This general model is important for the implementation of all-to-all communication in libraries such as MPI where collective communication may take place over arbitrary subsets of processors. The algorithm is simple and optimal up to an additive term that is small if the total number of processors is large compared to the maximal number of processors in a node.
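The one-port pairing that underlies such schedules is the classic 1-factorization of the complete graph: in each round every node is matched with exactly one partner. A sketch for an even number of nodes follows; the report's algorithm additionally handles nodes with different numbers of processors, which is not shown here.

```python
# A minimal sketch of the circle construction for 1-factorization of K_n
# (n even): n - 1 rounds, each a perfect matching, together covering
# every node pair exactly once.
def factor_rounds(n):
    rounds = []
    for r in range(n - 1):
        pairs = []
        for i in range(n - 1):
            j = (r - i) % (n - 1)
            if i < j:
                pairs.append((i, j))
            elif i == j:                 # fixed node n-1 takes this partner
                pairs.append((i, n - 1))
        rounds.append(pairs)
    return rounds

# e.g. factor_rounds(4) gives [(0,3),(1,2)], [(0,1),(2,3)], [(0,2),(1,3)]
```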
Export
BibTeX
@techreport{MPI-I-2002-1-008, TITLE = {The factor algorithm for all-to-all communication on clusters of {SMP} nodes}, AUTHOR = {Sanders, Peter and Tr{\"a}ff, Jesper Larsson}, LANGUAGE = {eng}, NUMBER = {MPI-I-2002-1-008}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2002}, DATE = {2002}, ABSTRACT = {We present an algorithm for all-to-all personalized communication, in which every processor has an individual message to deliver to every other processor. The machine model we consider is a cluster of processing nodes where each node, possibly consisting of several processors, can participate in only one communication operation with another node at a time. The nodes may have different numbers of processors. This general model is important for the implementation of all-to-all communication in libraries such as MPI where collective communication may take place over arbitrary subsets of processors. The algorithm is simple and optimal up to an additive term that is small if the total number of processors is large compared to the maximal number of processors in a node.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Sanders, Peter %A Träff, Jesper Larsson %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T The factor algorithm for all-to-all communication on clusters of SMP nodes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6C8F-7 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 2002 %P 8 p. %X We present an algorithm for all-to-all personalized communication, in which every processor has an individual message to deliver to every other processor. The machine model we consider is a cluster of processing nodes where each node, possibly consisting of several processors, can participate in only one communication operation with another node at a time. The nodes may have different numbers of processors. This general model is important for the implementation of all-to-all communication in libraries such as MPI where collective communication may take place over arbitrary subsets of processors. The algorithm is simple and optimal up to an additive term that is small if the total number of processors is large compared to the maximal number of processors in a node. %B Research Report / Max-Planck-Institut für Informatik
2001
[80]
B. Csaba and S. Lodha, “A Randomized On-line Algorithm for the k-Server Problem on a Line,” DIMACS-Center for Discrete Mathematics & Theoretical Computer Science, Piscataway, NJ, DIMACS TechReport 2001-34, 2001.
Abstract
We give an $O(n^{2/3}\log n)$-competitive randomized k-server algorithm when the underlying metric space is given by n equally spaced points on a line. For $n = k + o(k^{3/2}/\log k)$, this algorithm is $o(k)$-competitive.
Export
BibTeX
@techreport{Csaba2001, TITLE = {A Randomized On-line Algorithm for the k-Server Problem on a Line}, AUTHOR = {Csaba, Bela and Lodha, Sachin}, LANGUAGE = {eng}, NUMBER = {DIMACS TechReport 2001-34}, INSTITUTION = {DIMACS-Center for Discrete Mathematics \& Theoretical Computer Science}, ADDRESS = {Piscataway, NJ}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {We give an $O(n^{2/3}\log n)$-competitive randomized k-server algorithm when the underlying metric space is given by n equally spaced points on a line. For $n = k + o(k^{3/2}/\log k)$, this algorithm is $o(k)$-competitive.}, }
Endnote
%0 Report %A Csaba, Bela %A Lodha, Sachin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T A Randomized On-line Algorithm for the k-Server Problem on a Line : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-EBC5-0 %Y DIMACS-Center for Discrete Mathematics & Theoretical Computer Science %C Piscataway, NJ %D 2001 %X We give an $O(n^{2/3}\log n)$-competitive randomized k-server algorithm when the underlying metric space is given by n equally spaced points on a line. For $n = k + o(k^{3/2}/\log k)$, this algorithm is $o(k)$-competitive.
[81]
S. Hert, M. Hoffmann, L. Kettner, S. Pion, and M. Seel, “An adaptable and extensible geometry kernel,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2001-1-004, 2001.
Abstract
Geometric algorithms are based on geometric objects such as points, lines and circles. The term kernel refers to a collection of representations for constant-size geometric objects and operations on these representations. This paper describes how such a geometry kernel can be designed and implemented in C++, with special emphasis on adaptability, extensibility and efficiency. We achieve these goals following the generic programming paradigm and using templates as our tools. These ideas are realized and tested in CGAL, the Computational Geometry Algorithms Library.
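To illustrate the kernel idea outside C++: an algorithm is written against an abstract kernel interface, and concrete representations are swapped in as a parameter. The sketch below mimics the pattern in Python with a hypothetical one-predicate kernel (an orientation test), exchanging a floating-point and an exact rational implementation under the same convex hull algorithm; CGAL achieves the same effect statically via templates and traits classes.

```python
from fractions import Fraction

# A "kernel" bundles representations and predicates; the algorithm only
# uses the kernel interface, so implementations are interchangeable.
# Hypothetical minimal interface: a single orientation predicate.

class FloatKernel:
    @staticmethod
    def orientation(p, q, r):
        d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
        return (d > 0) - (d < 0)

class ExactKernel:
    @staticmethod
    def orientation(p, q, r):
        p, q, r = [tuple(map(Fraction, x)) for x in (p, q, r)]
        d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
        return (d > 0) - (d < 0)

def convex_hull(points, kernel):
    """Andrew's monotone chain, parameterized by the kernel's predicate."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(iterable):
        chain = []
        for p in iterable:
            while len(chain) >= 2 and kernel.orientation(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

pts = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0)]
print(convex_hull(pts, FloatKernel))   # same hull with either kernel
print(convex_hull(pts, ExactKernel))
```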
Export
BibTeX
@techreport{, TITLE = {An adaptable and extensible geometry kernel}, AUTHOR = {Hert, Susan and Hoffmann, Michael and Kettner, Lutz and Pion, Sylvain and Seel, Michael}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-004}, NUMBER = {MPI-I-2001-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {Geometric algorithms are based on geometric objects such as points, lines and circles. The term \textit{kernel\/} refers to a collection of representations for constant-size geometric objects and operations on these representations. This paper describes how such a geometry kernel can be designed and implemented in C++, having special emphasis on adaptability, extensibility and efficiency. We achieve these goals following the generic programming paradigm and using templates as our tools. These ideas are realized and tested in \cgal~\cite{svy-cgal}, the Computational Geometry Algorithms Library.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Hert, Susan %A Hoffmann, Michael %A Kettner, Lutz %A Pion, Sylvain %A Seel, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T An adaptable and extensible geometry kernel : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6D22-4 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-004 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2001 %P 27 p. %X Geometric algorithms are based on geometric objects such as points, lines and circles. The term \textit{kernel\/} refers to a collection of representations for constant-size geometric objects and operations on these representations. This paper describes how such a geometry kernel can be designed and implemented in C++, having special emphasis on adaptability, extensibility and efficiency. We achieve these goals following the generic programming paradigm and using templates as our tools. These ideas are realized and tested in \cgal~\cite{svy-cgal}, the Computational Geometry Algorithms Library. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[82]
P. Krysta, “Approximating minimum size 1,2-connected networks,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2001-1-001, 2001.
Abstract
The problem of finding the minimum size 2-connected subgraph is a classical problem in network design. It is known to be NP-hard even on cubic planar graphs and Max-SNP hard. We study the generalization of this problem, where requirements of 1 or 2 edge or vertex disjoint paths are specified between every pair of vertices, and the aim is to find a minimum subgraph satisfying these requirements. For both problems we give $3/2$-approximation algorithms. This improves on the straightforward $2$-approximation algorithms for these problems, and generalizes earlier results for 2-connectivity. We also give analyses of the classical local optimization heuristics for these two network design problems.
Export
BibTeX
@techreport{, TITLE = {Approximating minimum size 1,2-connected networks}, AUTHOR = {Krysta, Piotr}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-001}, NUMBER = {MPI-I-2001-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {The problem of finding the minimum size 2-connected subgraph is a classical problem in network design. It is known to be NP-hard even on cubic planar graphs and Max-SNP hard. We study the generalization of this problem, where requirements of 1 or 2 edge or vertex disjoint paths are specified between every pair of vertices, and the aim is to find a minimum subgraph satisfying these requirements. For both problems we give $3/2$-approximation algorithms. This improves on the straightforward $2$-approximation algorithms for these problems, and generalizes earlier results for 2-connectivity. We also give analyses of the classical local optimization heuristics for these two network design problems.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Krysta, Piotr %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Approximating minimum size 1,2-connected networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6D47-0 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-001 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2001 %P 22 p. %X The problem of finding the minimum size 2-connected subgraph is a classical problem in network design. It is known to be NP-hard even on cubic planar graphs and Max-SNP hard. We study the generalization of this problem, where requirements of 1 or 2 edge or vertex disjoint paths are specified between every pair of vertices, and the aim is to find a minimum subgraph satisfying these requirements. For both problems we give $3/2$-approximation algorithms. This improves on the straightforward $2$-approximation algorithms for these problems, and generalizes earlier results for 2-connectivity. We also give analyses of the classical local optimization heuristics for these two network design problems. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[83]
U. Meyer, “Directed single-source shortest-paths in linear average-case time,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2001-1-002, 2001.
Abstract
The quest for a linear-time single-source shortest-path (SSSP) algorithm on directed graphs with positive edge weights is an ongoing hot research topic. While Thorup recently found an ${\cal O}(n+m)$ time RAM algorithm for undirected graphs with $n$ nodes, $m$ edges and integer edge weights in $\{0,\ldots, 2^w-1\}$ where $w$ denotes the word length, the currently best time bound for directed sparse graphs on a RAM is ${\cal O}(n+m \cdot \log\log n)$. In the present paper we study the average-case complexity of SSSP. We give simple label-setting and label-correcting algorithms for arbitrary directed graphs with random real edge weights uniformly distributed in $\left[0,1\right]$ and show that they need linear time ${\cal O}(n+m)$ with high probability. A variant of the label-correcting approach also supports parallelization. Furthermore, we propose a general method to construct graphs with random edge weights which incur large non-linear expected running times on many traditional shortest-path algorithms.
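For orientation, a generic label-correcting scheme works as follows: tentative distances are relaxed until they stabilize, driven by a FIFO queue of vertices whose distance recently improved. The sketch below is a plain stand-in, not the bucket-based variants whose linear average-case bounds the report proves; the uniform random [0,1] edge weights match the model analyzed.

```python
import random
from collections import deque

# Generic label-correcting SSSP: repeatedly relax edges out of vertices
# whose tentative distance improved. This FIFO version is only a stand-in
# for the bucket-based variants analyzed in the report.

def label_correcting_sssp(graph, source):
    """graph: {u: [(v, weight), ...]}; returns dict of shortest distances."""
    dist = {u: float("inf") for u in graph}
    dist[source] = 0.0
    queue, in_queue = deque([source]), {source}
    while queue:
        u = queue.popleft()
        in_queue.discard(u)
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                if v not in in_queue:
                    queue.append(v)
                    in_queue.add(v)
    return dist

# Random sparse digraph with uniform [0,1] weights, as in the average-case model.
n, m = 1000, 5000
graph = {u: [] for u in range(n)}
for _ in range(m):
    graph[random.randrange(n)].append((random.randrange(n), random.random()))
dist = label_correcting_sssp(graph, 0)
print(sum(d < float("inf") for d in dist.values()), "vertices reachable")
```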
Export
BibTeX
@techreport{, TITLE = {Directed single-source shortest-paths in linear average-case time}, AUTHOR = {Meyer, Ulrich}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-002}, NUMBER = {MPI-I-2001-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {The quest for a linear-time single-source shortest-path (SSSP) algorithm on directed graphs with positive edge weights is an ongoing hot research topic. While Thorup recently found an ${\cal O}(n+m)$ time RAM algorithm for undirected graphs with $n$ nodes, $m$ edges and integer edge weights in $\{0,\ldots, 2^w-1\}$ where $w$ denotes the word length, the currently best time bound for directed sparse graphs on a RAM is ${\cal O}(n+m \cdot \log\log n)$. In the present paper we study the average-case complexity of SSSP. We give simple label-setting and label-correcting algorithms for arbitrary directed graphs with random real edge weights uniformly distributed in $\left[0,1\right]$ and show that they need linear time ${\cal O}(n+m)$ with high probability. A variant of the label-correcting approach also supports parallelization. Furthermore, we propose a general method to construct graphs with random edge weights which incur large non-linear expected running times on many traditional shortest-path algorithms.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Meyer, Ulrich %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Directed single-source shortest-paths in linear average-case time : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6D44-5 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-002 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2001 %P 32 p. %X The quest for a linear-time single-source shortest-path (SSSP) algorithm on directed graphs with positive edge weights is an ongoing hot research topic. While Thorup recently found an ${\cal O}(n+m)$ time RAM algorithm for undirected graphs with $n$ nodes, $m$ edges and integer edge weights in $\{0,\ldots, 2^w-1\}$ where $w$ denotes the word length, the currently best time bound for directed sparse graphs on a RAM is ${\cal O}(n+m \cdot \log\log n)$. In the present paper we study the average-case complexity of SSSP. We give simple label-setting and label-correcting algorithms for arbitrary directed graphs with random real edge weights uniformly distributed in $\left[0,1\right]$ and show that they need linear time ${\cal O}(n+m)$ with high probability. A variant of the label-correcting approach also supports parallelization. Furthermore, we propose a general method to construct graphs with random edge weights which incur large non-linear expected running times on many traditional shortest-path algorithms. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[84]
T. Polzin and S. Vahdati, “Extending reduction techniques for the Steiner tree problem: a combination of alternative- and bound-based approaches,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2001-1-007, 2001.
Abstract
A key ingredient of the most successful algorithms for the Steiner problem is the use of reduction methods, i.e., methods that reduce the size of a given instance while preserving at least one optimal solution (or the ability to efficiently reconstruct one). While classical reduction tests inspected only simple patterns (vertices or edges), recent and more sophisticated tests extend the scope of inspection to more general patterns (like trees). In this paper, we present such an extended reduction test, which generalizes different tests in the literature. We use the new approach of combining alternative- and bound-based methods, which substantially improves the impact of the tests. We also present several algorithmic improvements, especially for the computation of the needed information. The experimental results show a substantial improvement over previous methods using the idea of extension.
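As a baseline for what "simple pattern" reduction tests do, the sketch below applies two classical ones: a non-terminal of degree 1 can be deleted, and a non-terminal of degree 2 can be contracted into a single edge. These are illustrative textbook reductions under the stated instance encoding, not the extended tree-based tests the report develops.

```python
# Two classical "simple pattern" reductions for the Steiner problem, of the
# kind the report's extended tests generalize. Both preserve at least one
# optimal solution: a non-terminal leaf is never useful, and a non-terminal
# of degree 2 can only be traversed, so its two edges merge into one.

def reduce_instance(edges, terminals):
    """edges: {(u, v): weight} with u < v; terminals: set of vertices."""
    edges = dict(edges)
    changed = True
    while changed:
        changed = False
        adj = {}
        for (u, v), w in edges.items():
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
        for x, nbrs in adj.items():
            if x in terminals:
                continue
            if len(nbrs) == 1:            # non-terminal leaf: delete
                u, _ = nbrs[0]
                del edges[tuple(sorted((x, u)))]
                changed = True
                break
            if len(nbrs) == 2:            # non-terminal degree 2: contract
                (u, w1), (v, w2) = nbrs
                del edges[tuple(sorted((x, u)))]
                del edges[tuple(sorted((x, v)))]
                if u != v:
                    key = tuple(sorted((u, v)))
                    edges[key] = min(edges.get(key, float("inf")), w1 + w2)
                changed = True
                break
    return edges

edges = {(1, 2): 1.0, (2, 3): 2.0, (3, 4): 1.0, (2, 4): 2.5}
print(reduce_instance(edges, terminals={1, 4}))  # contracts down to {(1, 4): 3.5}
```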
Export
BibTeX
@techreport{MPI-I-2001-1-007, TITLE = {Extending reduction techniques for the Steiner tree problem: a combination of alternative-and bound-based approaches}, AUTHOR = {Polzin, Tobias and Vahdati, Siavash}, LANGUAGE = {eng}, NUMBER = {MPI-I-2001-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {A key ingredient of the most successful algorithms for the Steiner problem are reduction methods, i.e. methods to reduce the size of a given instance while preserving at least one optimal solution (or the ability to efficiently reconstruct one). While classical reduction tests just inspected simple patterns (vertices or edges), recent and more sophisticated tests extend the scope of inspection to more general patterns (like trees). In this paper, we present such an extended reduction test, which generalizes different tests in the literature. We use the new approach of combining alternative- and bound-based methods, which substantially improves the impact of the tests. We also present several algorithmic improvements, especially for the computation of the needed information. The experimental results show a substantial improvement over previous methods using the idea of extension.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Polzin, Tobias %A Vahdati, Siavash %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Extending reduction techniques for the Steiner tree problem: a combination of alternative-and bound-based approaches : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6D16-F %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2001 %P 24 p. %X A key ingredient of the most successful algorithms for the Steiner problem are reduction methods, i.e. methods to reduce the size of a given instance while preserving at least one optimal solution (or the ability to efficiently reconstruct one). While classical reduction tests just inspected simple patterns (vertices or edges), recent and more sophisticated tests extend the scope of inspection to more general patterns (like trees). In this paper, we present such an extended reduction test, which generalizes different tests in the literature. We use the new approach of combining alternative- and bound-based methods, which substantially improves the impact of the tests. We also present several algorithmic improvements, especially for the computation of the needed information. The experimental results show a substantial improvement over previous methods using the idea of extension. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[85]
T. Polzin and S. Vahdati, “Partitioning techniques for the Steiner problem,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2001-1-006, 2001.
Abstract
Partitioning is one of the basic ideas for designing efficient algorithms, but on NP-hard problems like the Steiner problem, straightforward application of the classical paradigms for exploiting this idea rarely leads to empirically successful algorithms. In this paper, we present a new approach based on vertex separators. We show several contexts in which this approach can be used profitably. Our approach is new in the sense that it uses partitioning to design reduction methods. We introduce two such methods and show their impact empirically.
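One elementary instance of separator-based reduction can be stated directly: if removing a single vertex s disconnects a component containing no terminal, that component can be deleted, since tree vertices inside it could only hang off s without reaching any terminal. The brute-force sketch below (hypothetical encoding: adjacency lists plus a terminal set) illustrates the flavor only, not the report's actual methods.

```python
# Separator-based pruning: for every single-vertex separator s, any
# connected component of G - s that contains no terminal can be deleted.
# Brute force over all vertices for clarity; real codes are smarter.

def components(alive, adj):
    seen, comps = set(), []
    for v in alive:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(y for y in adj.get(x, ()) if y in alive)
        seen |= comp
        comps.append(comp)
    return comps

def separator_reduction(vertices, adj, terminals):
    alive, terminals = set(vertices), set(terminals)
    changed = True
    while changed:
        changed = False
        for s in list(alive):
            for comp in components(alive - {s}, adj):
                if not comp & terminals:   # terminal-free: safe to delete
                    alive -= comp
                    changed = True
    return alive

adj = {1: [2], 2: [1, 3], 3: [2, 4, 5], 4: [3], 5: [3]}
print(separator_reduction(adj, adj, terminals={1, 4}))  # drops vertex 5
```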
Export
BibTeX
@techreport{, TITLE = {Partitioning techniques for the Steiner problem}, AUTHOR = {Polzin, Tobias and Vahdati, Siavash}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-006}, NUMBER = {MPI-I-2001-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {Partitioning is one of the basic ideas for designing efficient algorithms, but on \NP-hard problems like the Steiner problem straightforward application of the classical paradigms for exploiting this idea rarely leads to empirically successful algorithms. In this paper, we present a new approach which is based on vertex separators. We show several contexts in which this approach can be used profitably. Our approach is new in the sense that it uses partitioning to design reduction methods. We introduce two such methods; and show their impact empirically.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Polzin, Tobias %A Vahdati, Siavash %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Partitioning techniques for the Steiner problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6D19-9 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-006 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2001 %P 21 p. %X Partitioning is one of the basic ideas for designing efficient algorithms, but on \NP-hard problems like the Steiner problem straightforward application of the classical paradigms for exploiting this idea rarely leads to empirically successful algorithms. In this paper, we present a new approach which is based on vertex separators. We show several contexts in which this approach can be used profitably. Our approach is new in the sense that it uses partitioning to design reduction methods. We introduce two such methods; and show their impact empirically. %B Research Report
[86]
T. Polzin and S. Vahdati, “On Steiner trees and minimum spanning trees in hypergraphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2001-1-005, 2001.
Abstract
The state-of-the-art algorithms for geometric Steiner problems use a two-phase approach based on full Steiner trees (FSTs). The bottleneck of this approach is the second phase (FST concatenation phase), in which an optimum Steiner tree is constructed out of the FSTs generated in the first phase. The hitherto most successful algorithm for this phase considers the FSTs as edges of a hypergraph and is based on an LP-relaxation of the minimum spanning tree in hypergraphs (MSTH) problem. In this paper, we compare this original relaxation and some new relaxations of this problem and show their equivalence, thereby refuting a conjecture in the literature. Since the second phase can also be formulated as a Steiner problem in graphs, we clarify the relation of this MSTH relaxation to all classical relaxations of the Steiner problem. Finally, we perform some experiments, both on the quality of the relaxations and on FST concatenation methods based on them, leading to the surprising result that an algorithm of ours, designed for general graphs, is superior to the MSTH approach.
Export
BibTeX
@techreport{, TITLE = {On Steiner trees and minimum spanning trees in hypergraphs}, AUTHOR = {Polzin, Tobias and Vahdati, Siavash}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-005}, NUMBER = {MPI-I-2001-1-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {The state-of-the-art algorithms for geometric Steiner problems use a two-phase approach based on full Steiner trees (FSTs). The bottleneck of this approach is the second phase (FST concatenation phase), in which an optimum Steiner tree is constructed out of the FSTs generated in the first phase. The hitherto most successful algorithm for this phase considers the FSTs as edges of a hypergraph and is based on an LP-relaxation of the minimum spanning tree in hypergraph (MSTH) problem. In this paper, we compare this original and some new relaxations of this problem and show their equivalence, and thereby refute a conjecture in the literature. Since the second phase can also be formulated as a Steiner problem in graphs, we clarify the relation of this MSTH-relaxation to all classical relaxations of the Steiner problem. Finally, we perform some experiments, both on the quality of the relaxations and on FST-concatenation methods based on them, leading to the surprising result that an algorithm of ours which is designed for general graphs is superior to the MSTH-approach.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Polzin, Tobias %A Vahdati, Siavash %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On Steiner trees and minimum spanning trees in hypergraphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6D1F-E %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-005 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2001 %P 15 p. %X The state-of-the-art algorithms for geometric Steiner problems use a two-phase approach based on full Steiner trees (FSTs). The bottleneck of this approach is the second phase (FST concatenation phase), in which an optimum Steiner tree is constructed out of the FSTs generated in the first phase. The hitherto most successful algorithm for this phase considers the FSTs as edges of a hypergraph and is based on an LP-relaxation of the minimum spanning tree in hypergraph (MSTH) problem. In this paper, we compare this original and some new relaxations of this problem and show their equivalence, and thereby refute a conjecture in the literature. Since the second phase can also be formulated as a Steiner problem in graphs, we clarify the relation of this MSTH-relaxation to all classical relaxations of the Steiner problem. Finally, we perform some experiments, both on the quality of the relaxations and on FST-concatenation methods based on them, leading to the surprising result that an algorithm of ours which is designed for general graphs is superior to the MSTH-approach. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[87]
M. Seel, “Implementation of planar Nef polyhedra,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2001-1-003, 2001.
Abstract
A planar Nef polyhedron is any set that can be obtained from open half-spaces by a finite number of set complement and set intersection operations. The set of Nef polyhedra is closed under the Boolean set operations. We describe a data structure that realizes two-dimensional Nef polyhedra and offers a large set of binary and unary set operations. The underlying set operations are realized by an efficient and complete algorithm for the overlay of two Nef polyhedra. The algorithm is efficient in the sense that its running time is bounded by the size of the inputs plus the size of the output times a logarithmic factor. The algorithm is complete in the sense that it can handle all inputs and requires no general position assumption. The second part of the algorithmic interface considers point location and ray shooting in planar subdivisions. The implementation follows the generic programming paradigm in C++ and CGAL. Several concept interfaces are defined that allow the adaptation of the software by means of traits classes. The described project is part of the CGAL library version 2.3.
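The defining closure property can be taken literally: represent a planar Nef polyhedron as a point-membership predicate generated from open half-planes by complement and intersection. The sketch below shows only this algebra; the report's contribution is an explicit data structure (a plane subdivision) on which overlay, point location and ray shooting run efficiently, which a bare predicate cannot offer.

```python
# The Nef closure property, made literal: a planar Nef polyhedron as a
# point-membership predicate built from open half-planes under complement
# and intersection. Union falls out via De Morgan.

def open_half_plane(a, b, c):
    """Points (x, y) with a*x + b*y + c > 0."""
    return lambda x, y: a * x + b * y + c > 0

def complement(p):
    return lambda x, y: not p(x, y)

def intersection(p, q):
    return lambda x, y: p(x, y) and q(x, y)

def union(p, q):  # derived operation: De Morgan
    return complement(intersection(complement(p), complement(q)))

# Closed unit square = complement of the union of four open half-planes.
outside = union(union(open_half_plane(-1, 0, 0), open_half_plane(1, 0, -1)),
                union(open_half_plane(0, -1, 0), open_half_plane(0, 1, -1)))
square = complement(outside)
print(square(0.5, 0.5), square(0.0, 0.0), square(1.5, 0.5))  # True True False
```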
Export
BibTeX
@techreport{, TITLE = {Implementation of planar Nef polyhedra}, AUTHOR = {Seel, Michael}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-003}, NUMBER = {MPI-I-2001-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2001}, DATE = {2001}, ABSTRACT = {A planar Nef polyhedron is any set that can be obtained from the open half-space by a finite number of set complement and set intersection operations. The set of Nef polyhedra is closed under the Boolean set operations. We describe a date structure that realizes two-dimensional Nef polyhedra and offers a large set of binary and unary set operations. The underlying set operations are realized by an efficient and complete algorithm for the overlay of two Nef polyhedra. The algorithm is efficient in the sense that its running time is bounded by the size of the inputs plus the size of the output times a logarithmic factor. The algorithm is complete in the sense that it can handle all inputs and requires no general position assumption. The seecond part of the algorithmic interface considers point location and ray shooting in planar subdivisions. The implementation follows the generic programming paradigm in C++ and CGAL. Several concept interfaces are defined that allow the adaptation of the software by the means of traits classes. The described project is part of the CGAL libarary version 2.3.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Seel, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Implementation of planar Nef polyhedra : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6D25-D %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-003 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2001 %P 345 p. %X A planar Nef polyhedron is any set that can be obtained from the open half-space by a finite number of set complement and set intersection operations. The set of Nef polyhedra is closed under the Boolean set operations. We describe a date structure that realizes two-dimensional Nef polyhedra and offers a large set of binary and unary set operations. The underlying set operations are realized by an efficient and complete algorithm for the overlay of two Nef polyhedra. The algorithm is efficient in the sense that its running time is bounded by the size of the inputs plus the size of the output times a logarithmic factor. The algorithm is complete in the sense that it can handle all inputs and requires no general position assumption. The seecond part of the algorithmic interface considers point location and ray shooting in planar subdivisions. The implementation follows the generic programming paradigm in C++ and CGAL. Several concept interfaces are defined that allow the adaptation of the software by the means of traits classes. The described project is part of the CGAL libarary version 2.3. %B Research Report / Max-Planck-Institut f&#252;r Informatik
2000
[88]
E. Althaus, O. Kohlbacher, H.-P. Lenhof, and P. Müller, “A branch and cut algorithm for the optimal solution of the side-chain placement problem,” MPI-I-2000-1-001, 2000.
Abstract
Rigid-body docking approaches are not sufficient to predict the structure of a protein complex from the unbound (native) structures of the two proteins. Accounting for side chain flexibility is an important step towards fully flexible protein docking. This work describes an approach that allows conformational flexibility for the side chains while keeping the protein backbone rigid. Starting from candidates created by a rigid-docking algorithm, we demangle the side chains of the docking site, thus creating reasonable approximations of the true complex structure. These structures are ranked with respect to the binding free energy. We present two new techniques for side chain demangling. Both approaches are based on a discrete representation of the side chain conformational space by the use of a rotamer library. This leads to a combinatorial optimization problem. For the solution of this problem we propose a fast heuristic approach and an exact, albeit slower, method that uses branch-and-cut techniques. As a test set we use the unbound structures of three proteases and the corresponding protein inhibitors. For each of the examples, the highest-ranking conformation produced was a good approximation of the true complex structure.
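The combinatorial core is easy to state: every movable side chain picks one rotamer from its library, and the objective is the sum of self energies plus pairwise interaction energies. The sketch below solves a toy instance by exhaustive enumeration with made-up energy tables; the report attacks the same objective with a fast heuristic and an exact branch-and-cut, since enumeration is hopeless at realistic sizes.

```python
from itertools import product

# Side-chain placement as discrete optimization: residue i picks rotamer
# r_i; minimize sum of self energies E_i(r_i) plus pairwise E_ij(r_i, r_j).
# Exhaustive enumeration, feasible only for toy sizes; energies are made up.

def best_assignment(self_energy, pair_energy):
    """self_energy[i][r]; pair_energy[(i, j)][(ri, rj)] with i < j."""
    n = len(self_energy)
    choices = [range(len(self_energy[i])) for i in range(n)]
    best, best_e = None, float("inf")
    for assign in product(*choices):
        e = sum(self_energy[i][assign[i]] for i in range(n))
        e += sum(tbl[(assign[i], assign[j])]
                 for (i, j), tbl in pair_energy.items())
        if e < best_e:
            best, best_e = assign, e
    return best, best_e

self_energy = [[0.0, 1.2], [0.3, 0.0], [0.5, 0.1]]
pair_energy = {(0, 1): {(0, 0): 2.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0},
               (1, 2): {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.0}}
print(best_assignment(self_energy, pair_energy))  # ((0, 1, 1), 0.1)
```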
Export
BibTeX
@techreport{AlthausKohlbacherLenhofMuller2000, TITLE = {A branch and cut algorithm for the optimal solution of the side-chain placement problem}, AUTHOR = {Althaus, Ernst and Kohlbacher, Oliver and Lenhof, Hans-Peter and M{\"u}ller, Peter}, LANGUAGE = {eng}, NUMBER = {MPI-I-2000-1-001}, YEAR = {2000}, DATE = {2000}, ABSTRACT = {Rigid--body docking approaches are not sufficient to predict the structure of a protein complex from the unbound (native) structures of the two proteins. Accounting for side chain flexibility is an important step towards fully flexible protein docking. This work describes an approach that allows conformational flexibility for the side chains while keeping the protein backbone rigid. Starting from candidates created by a rigid--docking algorithm, we demangle the side chains of the docking site, thus creating reasonable approximations of the true complex structure. These structures are ranked with respect to the binding free energy. We present two new techniques for side chain demangling. Both approaches are based on a discrete representation of the side chain conformational space by the use of a rotamer library. This leads to a combinatorial optimization problem. For the solution of this problem we propose a fast heuristic approach and an exact, albeit slower, method that uses branch--\&--cut techniques. As a test set we use the unbound structures of three proteases and the corresponding protein inhibitors. For each of the examples, the highest--ranking conformation produced was a good approximation of the true complex structure.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Althaus, Ernst %A Kohlbacher, Oliver %A Lenhof, Hans-Peter %A M&#252;ller, Peter %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A branch and cut algorithm for the optimal solution of the side-chain placement problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A866-0 %D 2000 %P 26 p. %X Rigid--body docking approaches are not sufficient to predict the structure of a protein complex from the unbound (native) structures of the two proteins. Accounting for side chain flexibility is an important step towards fully flexible protein docking. This work describes an approach that allows conformational flexibility for the side chains while keeping the protein backbone rigid. Starting from candidates created by a rigid--docking algorithm, we demangle the side chains of the docking site, thus creating reasonable approximations of the true complex structure. These structures are ranked with respect to the binding free energy. We present two new techniques for side chain demangling. Both approaches are based on a discrete representation of the side chain conformational space by the use of a rotamer library. This leads to a combinatorial optimization problem. For the solution of this problem we propose a fast heuristic approach and an exact, albeit slower, method that uses branch--\&--cut techniques. As a test set we use the unbound structures of three proteases and the corresponding protein inhibitors. For each of the examples, the highest--ranking conformation produced was a good approximation of the true complex structure. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[89]
R. Beier and J. Sibeyn, “A powerful heuristic for telephone gossiping,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2000-1-002, 2000.
Abstract
A refined heuristic for computing schedules for gossiping in the telephone model is presented. The heuristic is fast: for a network with n nodes and m edges, requiring R rounds for gossiping, the running time is O(R n log(n) m) for all tested classes of graphs. This moderate time consumption makes it possible to compute gossiping schedules for networks with more than 10,000 PUs and 100,000 connections. The heuristic is good: in practice the computed schedules never exceed the optimum by more than a few rounds. The heuristic is versatile: it can also be used for broadcasting and more general information dispersion patterns. It can handle both the unit-cost and the linear-cost model. The heuristic is so good that for CCC, shuffle-exchange, butterfly, de Bruijn, star and pancake networks the constructed gossiping schedules are better than the best theoretically derived ones. For example, for gossiping on a shuffle-exchange network with 2^{13} PUs, the former upper bound was 49 rounds, while our heuristic finds a schedule requiring 31 rounds. Also for broadcasting, the heuristic improves on many formerly known results. A second heuristic works even better for CCC, butterfly, star and pancake networks. For example, with this heuristic we found that gossiping on a pancake network with 7! PUs can be performed in 15 rounds, 2 fewer than achieved by the best theoretical construction. This second heuristic is less versatile than the first, but by refined search techniques it can tackle even larger problems, the main limitation being the storage capacity. Another advantage is that the constructed schedules can be represented concisely.
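The object being optimized is a schedule: a sequence of rounds, each a matching of nodes, such that after the last round every node knows every other node's token. The simulator below merely checks a given schedule for completeness in the unit-cost telephone model; finding short schedules is the heuristic's job and is not reproduced here.

```python
# Telephone-model gossiping: in each round nodes are paired by a matching,
# and both partners merge everything they know. A schedule is complete when
# every node knows every other node's token. This only *checks* a schedule;
# the report's heuristic is about *finding* short ones.

def gossip_complete(n, rounds):
    """rounds: list of matchings, each a list of disjoint (u, v) pairs."""
    know = [{v} for v in range(n)]
    for matching in rounds:
        for u, v in matching:
            merged = know[u] | know[v]
            know[u] = know[v] = merged
    return all(len(k) == n for k in know)

# 4 nodes gossip in 2 rounds, matching the log2(n) lower bound for even n.
rounds = [[(0, 1), (2, 3)], [(0, 2), (1, 3)]]
print(gossip_complete(4, rounds))  # True
```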
Export
BibTeX
@techreport{MPI-I-2000-1-002, TITLE = {A powerful heuristic for telephone gossiping}, AUTHOR = {Beier, Ren{\'e} and Sibeyn, Jop}, LANGUAGE = {eng}, NUMBER = {MPI-I-2000-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2000}, DATE = {2000}, ABSTRACT = {A refined heuristic for computing schedules for gossiping in the telephone model is presented. The heuristic is fast: for a network with n nodes and m edges, requiring R rounds for gossiping, the running time is O(R n log(n) m) for all tested classes of graphs. This moderate time consumption allows to compute gossiping schedules for networks with more than 10,000 PUs and 100,000 connections. The heuristic is good: in practice the computed schedules never exceed the optimum by more than a few rounds. The heuristic is versatile: it can also be used for broadcasting and more general information dispersion patterns. It can handle both the unit-cost and the linear-cost model. Actually, the heuristic is so good, that for CCC, shuffle-exchange, butterfly de Bruijn, star and pancake networks the constructed gossiping schedules are better than the best theoretically derived ones. For example, for gossiping on a shuffle-exchange network with 2^{13} PUs, the former upper bound was 49 rounds, while our heuristic finds a schedule requiring 31 rounds. Also for broadcasting the heuristic improves on many formerly known results. A second heuristic, works even better for CCC, butterfly, star and pancake networks. For example, with this heuristic we found that gossiping on a pancake network with 7! PUs can be performed in 15 rounds, 2 fewer than achieved by the best theoretical construction. This second heuristic is less versatile than the first, but by refined search techniques it can tackle even larger problems, the main limitation being the storage capacity. Another advantage is that the constructed schedules can be represented concisely.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Beier, Ren&#233; %A Sibeyn, Jop %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A powerful heuristic for telephone gossiping : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6F2E-5 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2000 %P 23 p. %X A refined heuristic for computing schedules for gossiping in the telephone model is presented. The heuristic is fast: for a network with n nodes and m edges, requiring R rounds for gossiping, the running time is O(R n log(n) m) for all tested classes of graphs. This moderate time consumption allows to compute gossiping schedules for networks with more than 10,000 PUs and 100,000 connections. The heuristic is good: in practice the computed schedules never exceed the optimum by more than a few rounds. The heuristic is versatile: it can also be used for broadcasting and more general information dispersion patterns. It can handle both the unit-cost and the linear-cost model. Actually, the heuristic is so good, that for CCC, shuffle-exchange, butterfly de Bruijn, star and pancake networks the constructed gossiping schedules are better than the best theoretically derived ones. For example, for gossiping on a shuffle-exchange network with 2^{13} PUs, the former upper bound was 49 rounds, while our heuristic finds a schedule requiring 31 rounds. Also for broadcasting the heuristic improves on many formerly known results. A second heuristic, works even better for CCC, butterfly, star and pancake networks. For example, with this heuristic we found that gossiping on a pancake network with 7! PUs can be performed in 15 rounds, 2 fewer than achieved by the best theoretical construction. This second heuristic is less versatile than the first, but by refined search techniques it can tackle even larger problems, the main limitation being the storage capacity. Another advantage is that the constructed schedules can be represented concisely. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[90]
P. Fatourou, “Low-contention depth-first scheduling of parallel computations with synchronization variables,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2000-1-003, 2000.
Abstract
In this paper, we present a randomized, online, space-efficient algorithm for the general class of programs with synchronization variables (such programs are produced by parallel programming languages such as Cool, ID, Sisal, Mul-T, OLDEN and Jade). The algorithm achieves good locality and low scheduling overheads for this general class of computations by combining work-stealing and depth-first scheduling. More specifically, given a computation with work $T_1$, depth $T_\infty$ and $\sigma$ synchronizations whose execution requires space $S_1$ on a single-processor computer, our algorithm achieves expected space complexity at most $S_1 + O(PT_\infty \log (PT_\infty))$ and runs in an expected number of $O(T_1/P + \sigma \log (PT_\infty)/P + T_\infty \log (PT_\infty))$ timesteps on a shared-memory, parallel machine with $P$ processors. Moreover, for any $\varepsilon > 0$, the space complexity of our algorithm is at most $S_1 + O(P(T_\infty + \ln (1/\varepsilon)) \log (P(T_\infty + \ln(P(T_\infty + \ln (1/\varepsilon))/\varepsilon))))$ with probability at least $1-\varepsilon$. Thus, even for values of $\varepsilon$ as small as $e^{-T_\infty}$, the space complexity of our algorithm is at most $S_1 + O(PT_\infty \log(PT_\infty))$, with probability at least $1-e^{-T_\infty}$. The algorithm achieves good locality and low scheduling overheads by automatically increasing the granularity of the work scheduled on each processor. Our results combine and extend previous algorithms and analysis techniques (published by Blelloch et al. [6] and by Narlikar [26]). Our algorithm not only exhibits the same good space complexity for the general class of programs with synchronization variables as its deterministic analog presented in [6], but it also achieves good locality and low scheduling overhead like the algorithm presented in [26], which, however, performs well only for the more restricted class of nested parallel computations.
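The combination named in the abstract can be sketched in a few lines: each processor works depth-first off the bottom of its own deque, and an idle processor steals from the top of a victim's deque, where the shallowest (and typically largest) subcomputations sit. The sequentialized toy simulation below shows only this scheduling discipline; synchronization variables and the space accounting of the report are not modeled.

```python
import random
from collections import deque

# Work-stealing combined with depth-first order: own work is popped from
# the bottom of the local deque (depth-first), steals take from the top.
# A sequentialized toy; no synchronization variables, no space accounting.

def simulate(initial_tasks, num_procs, spawn):
    """initial_tasks: list of task ids; spawn(t) returns child tasks of t."""
    deques = [deque() for _ in range(num_procs)]
    deques[0].extend(initial_tasks)
    done = []
    while any(deques):
        for p in range(num_procs):
            if deques[p]:
                task = deques[p].pop()                    # own work: bottom
            else:
                victims = [q for q in range(num_procs) if deques[q]]
                if not victims:
                    continue
                task = deques[random.choice(victims)].popleft()  # steal: top
            done.append(task)
            deques[p].extend(spawn(task))                 # children go local
    return done

# Binary task tree of depth 3: task (depth, id) spawns two children.
spawn = lambda t: [(t[0] + 1, 2 * t[1] + i) for i in (0, 1)] if t[0] < 3 else []
print(len(simulate([(0, 0)], num_procs=4, spawn=spawn)))  # 15 tasks executed
```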
Export
BibTeX
@techreport{, TITLE = {Low-contention depth-first scheduling of parallel computations with synchronization variables}, AUTHOR = {Fatourou, Panagiota}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2000-1-003}, NUMBER = {MPI-I-2000-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2000}, DATE = {2000}, ABSTRACT = {In this paper, we present a randomized, online, space-efficient algorithm for the general class of programs with synchronization variables (such programs are produced by parallel programming languages, like, e.g., Cool, ID, Sisal, Mul-T, OLDEN and Jade). The algorithm achieves good locality and low scheduling overheads for this general class of computations, by combining work-stealing and depth-first scheduling. More specifically, given a computation with work $T_1$, depth $T_\infty$ and $\sigma$ synchronizations that its execution requires space $S_1$ on a single-processor computer, our algorithm achieves expected space complexity at most $S_1 + O(PT_\infty \log (PT_\infty))$ and runs in an expected number of $O(T_1/P + \sigma \log (PT_\infty)/P + T_\infty \log (PT_\infty))$ timesteps on a shared-memory, parallel machine with $P$ processors. Moreover, for any $\varepsilon > 0$, the space complexity of our algorithm is at most $S_1 + O(P(T_\infty + \ln (1/\varepsilon)) \log (P(T_\infty + \ln(P(T_\infty + \ln (1/\varepsilon))/\varepsilon))))$ with probability at least $1-\varepsilon$. Thus, even for values of $\varepsilon$ as small as $e^{-T_\infty}$, the space complexity of our algorithm is at most $S_1 + O(PT_\infty \log(PT_\infty))$, with probability at least $1-e^{-T_\infty}$. The algorithm achieves good locality and low scheduling overheads by automatically increasing the granularity of the work scheduled on each processor. Our results combine and extend previous algorithms and analysis techniques (published by Blelloch et. al [6] and by Narlikar [26]). Our algorithm not only exhibits the same good space complexity for the general class of programs with synchronization variables as its deterministic analog presented in [6], but it also achieves good locality and low scheduling overhead as the algorithm presented in [26], which however performs well only for the more restricted class of nested parallel computations.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Fatourou, Panagiota %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Low-contention depth-first scheduling of parallel computations with synchronization variables : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6F2B-B %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2000-1-003 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2000 %P 56 p. %X In this paper, we present a randomized, online, space-efficient algorithm for the general class of programs with synchronization variables (such programs are produced by parallel programming languages, like, e.g., Cool, ID, Sisal, Mul-T, OLDEN and Jade). The algorithm achieves good locality and low scheduling overheads for this general class of computations, by combining work-stealing and depth-first scheduling. More specifically, given a computation with work $T_1$, depth $T_\infty$ and $\sigma$ synchronizations that its execution requires space $S_1$ on a single-processor computer, our algorithm achieves expected space complexity at most $S_1 + O(PT_\infty \log (PT_\infty))$ and runs in an expected number of $O(T_1/P + \sigma \log (PT_\infty)/P + T_\infty \log (PT_\infty))$ timesteps on a shared-memory, parallel machine with $P$ processors. Moreover, for any $\varepsilon > 0$, the space complexity of our algorithm is at most $S_1 + O(P(T_\infty + \ln (1/\varepsilon)) \log (P(T_\infty + \ln(P(T_\infty + \ln (1/\varepsilon))/\varepsilon))))$ with probability at least $1-\varepsilon$. Thus, even for values of $\varepsilon$ as small as $e^{-T_\infty}$, the space complexity of our algorithm is at most $S_1 + O(PT_\infty \log(PT_\infty))$, with probability at least $1-e^{-T_\infty}$. The algorithm achieves good locality and low scheduling overheads by automatically increasing the granularity of the work scheduled on each processor. Our results combine and extend previous algorithms and analysis techniques (published by Blelloch et. al [6] and by Narlikar [26]). Our algorithm not only exhibits the same good space complexity for the general class of programs with synchronization variables as its deterministic analog presented in [6], but it also achieves good locality and low scheduling overhead as the algorithm presented in [26], which however performs well only for the more restricted class of nested parallel computations. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[91]
K. Mehlhorn and S. Schirra, “A Generalized and improved constructive separation bound for real algebraic expressions,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2000-1-004, 2000.
Abstract
We prove a separation bound for a large class of algebraic expressions specified by expression dags. The bound applies to expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, $k$-th root operations for integral $k$, and taking roots of polynomials whose coefficients are given by the values of subexpressions. The (logarithm of the) new bound depends linearly on the algebraic degree of the expression. Previous bounds applied to a smaller class of expressions and did not guarantee linear dependency.
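The quantity the bound depends on can be computed by a simple dag traversal: integer leaves and the rational operations contribute no degree factor, while each distinct k-th root node multiplies the algebraic degree bound by k (a root of a degree-d polynomial would contribute a factor d, omitted below). This sketch computes only that degree bound; the separation bound itself, with its constants, is the subject of the report.

```python
# Algebraic degree bound of an expression dag: the value lies in a tower of
# field extensions, one per distinct radical node, so its degree divides the
# product of the root indices. Shared subexpressions are counted once.

class Node:
    def __init__(self, op, children=(), k=None):
        self.op, self.children, self.k = op, children, k

def degree_bound(root):
    """Upper bound: product of root indices over distinct radical nodes."""
    seen, stack, d = set(), [root], 1
    while stack:
        node = stack.pop()
        if id(node) in seen:
            continue
        seen.add(id(node))
        if node.op == "root":
            d *= node.k
        stack.extend(node.children)
    return d

# sqrt(2) + sqrt(3): two square-root nodes give degree bound 2 * 2 = 4.
two, three = Node("int"), Node("int")
expr = Node("+", (Node("root", (two,), k=2), Node("root", (three,), k=2)))
print(degree_bound(expr))  # 4
```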
Export
BibTeX
@techreport{MPI-I-2000-1-004, TITLE = {A Generalized and improved constructive separation bound for real algebraic expressions}, AUTHOR = {Mehlhorn, Kurt and Schirra, Stefan}, LANGUAGE = {eng}, NUMBER = {MPI-I-2000-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2000}, DATE = {2000}, ABSTRACT = {We prove a separation bound for a large class of algebraic expressions specified by expression dags. The bound applies to expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, $k$-th root operations for integral $k$, and taking roots of polynomials whose coefficients are given by the values of subexpressions. The (logarithm of the) new bound depends linearly on the algebraic degree of the expression. Previous bounds applied to a smaller class of expressions and did not guarantee linear dependency. \ignore{In~\cite{BFMS} the dependency was quadratic. and in the Li-Yap bound~\cite{LY} the dependency is usually linear, but may be even worse than quadratic.}}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Mehlhorn, Kurt %A Schirra, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Generalized and improved constructive separation bound for real algebraic expressions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6D56-E %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2000 %P 12 p. %X We prove a separation bound for a large class of algebraic expressions specified by expression dags. The bound applies to expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, $k$-th root operations for integral $k$, and taking roots of polynomials whose coefficients are given by the values of subexpressions. The (logarithm of the) new bound depends linearly on the algebraic degree of the expression. Previous bounds applied to a smaller class of expressions and did not guarantee linear dependency. \ignore{In~\cite{BFMS} the dependency was quadratic. and in the Li-Yap bound~\cite{LY} the dependency is usually linear, but may be even worse than quadratic.} %B Research Report / Max-Planck-Institut f&#252;r Informatik
[92]
M. Seel and K. Mehlhorn, “Infimaximal frames: a technique for making lines look like segments,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-2000-1-005, 2000.
Abstract
Many geometric algorithms that are usually formulated for points and segments generalize easily to inputs also containing rays and lines. The sweep algorithm for segment intersection is a prototypical example. Implementations of such algorithms do not, in general, extend as easily. For example, segment endpoints cause events in sweep line algorithms, but lines have no endpoints. We describe a general technique, which we call infimaximal frames, for extending implementations to inputs also containing rays and lines. The technique can also be used to extend implementations of planar subdivisions to subdivisions with many unbounded faces. We have used the technique successfully in generalizing a sweep algorithm designed for segments to rays and lines and also in an implementation of planar Nef polyhedra. Our implementation is based on concepts of generic programming in C++ and the geometric data types provided by the C++ Computational Geometry Algorithms Library (CGAL).
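The effect of a frame can be imitated numerically: clip each input line against a large box [-R, R]^2, so downstream code only ever sees a segment with two endpoints on the frame. The sketch below does exactly this with a fixed constant R; the point of infimaximal frames is that R is handled symbolically, so no concrete choice can ever turn out too small.

```python
# Imitating a frame numerically: clip the line a*x + b*y + c = 0 to the box
# [-R, R]^2, producing a segment with two "frame endpoints". The report's
# technique treats R symbolically (infimaximally) instead of as a constant.

def clip_line_to_frame(a, b, c, R):
    """Return the two endpoints of the line within the frame, or None."""
    pts = []
    for x in (-R, R):                   # intersections with vertical sides
        if b != 0:
            y = -(a * x + c) / b
            if -R <= y <= R:
                pts.append((x, y))
    for y in (-R, R):                   # intersections with horizontal sides
        if a != 0:
            x = -(b * y + c) / a
            if -R <= x <= R:
                pts.append((x, y))
    uniq = sorted(set(pts))             # corners may be hit twice
    return (uniq[0], uniq[-1]) if len(uniq) >= 2 else None

print(clip_line_to_frame(1.0, -1.0, 0.0, 1e6))  # the diagonal y = x, clipped
```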
Export
BibTeX
@techreport{MPI-I-2000-1-005, TITLE = {Infimaximal frames: a technique for making lines look like segments}, AUTHOR = {Seel, Michael and Mehlhorn, Kurt}, LANGUAGE = {eng}, NUMBER = {MPI-I-2000-1-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2000}, DATE = {2000}, ABSTRACT = {Many geometric algorithms that are usually formulated for points and segments generalize easily to inputs also containing rays and lines. The sweep algorithm for segment intersection is a prototypical example. Implementations of such algorithms do, in general, not extend easily. For example, segment endpoints cause events in sweep line algorithms, but lines have no endpoints. We describe a general technique, which we call infimaximal frames, for extending implementations to inputs also containing rays and lines. The technique can also be used to extend implementations of planar subdivisions to subdivisions with many unbounded faces. We have used the technique successfully in generalizing a sweep algorithm designed for segments to rays and lines and also in an implementation of planar Nef polyhedra. Our implementation is based on concepts of generic programming in C++ and the geometric data types provided by the C++ Computational Geometry Algorithms Library (CGAL).}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Seel, Michael %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Infimaximal frames: a technique for making lines look like segments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6D53-3 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 2000 %P 16 p. %X Many geometric algorithms that are usually formulated for points and segments generalize easily to inputs also containing rays and lines. The sweep algorithm for segment intersection is a prototypical example. Implementations of such algorithms do, in general, not extend easily. For example, segment endpoints cause events in sweep line algorithms, but lines have no endpoints. We describe a general technique, which we call infimaximal frames, for extending implementations to inputs also containing rays and lines. The technique can also be used to extend implementations of planar subdivisions to subdivisions with many unbounded faces. We have used the technique successfully in generalizing a sweep algorithm designed for segments to rays and lines and also in an implementation of planar Nef polyhedra. Our implementation is based on concepts of generic programming in C++ and the geometric data types provided by the C++ Computational Geometry Algorithms Library (CGAL). %B Research Report / Max-Planck-Institut f&#252;r Informatik
1999
[93]
N. Boghossian, O. Kohlbacher, and H.-P. Lenhof, “BALL: Biochemical Algorithms Library,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1999-1-002, 1999.
Abstract
In the next century, virtual laboratories will play a key role in biotechnology. Computer experiments will not only replace time-consuming and expensive real-world experiments, but they will also provide insights that cannot be obtained using "wet" experiments. The field that deals with the modeling of atoms, molecules, and their reactions is called Molecular Modeling. The advent of the Life Sciences gave rise to numerous new developments in this area. However, the implementation of new simulation tools is extremely time-consuming. This is mainly due to the large amount of supporting code (e.g., for data import/export, visualization, and so on) that is required in addition to the code necessary to implement the new idea. The only way to reduce the development time is to reuse reliable code, preferably using object-oriented approaches. We have designed and implemented BALL, the first object-oriented application framework for rapid prototyping in Molecular Modeling. By the use of the composite design pattern and polymorphism we were able to model the multitude of complex biochemical concepts in a well-structured and comprehensible class hierarchy, the BALL kernel classes. The isomorphism between the biochemical structures and the kernel classes leads to an intuitive interface. Since BALL was designed for rapid software prototyping, ease of use and flexibility were our principal design goals. Besides the kernel classes, BALL provides fundamental components for import/export of data in various file formats, Molecular Mechanics simulations, three-dimensional visualization, and more complex ones like a numerical solver for the Poisson-Boltzmann equation. The usefulness of BALL was shown by the implementation of an algorithm that checks proteins for similarity. Instead of the five months that an earlier implementation took, we were able to implement it within a day using BALL.
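The composite design pattern mentioned above, in miniature: atoms are leaves, while fragments and molecules are composites, and client code traverses either uniformly. This is purely illustrative of the pattern; BALL's real kernel classes model far more biochemical structure than these two hypothetical classes.

```python
# Composite pattern in miniature: Atom is a leaf, Composite groups children,
# and atoms() lets client code traverse any level of the hierarchy uniformly.

class Composite:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)
    def atoms(self):
        for child in self.children:
            yield from child.atoms()

class Atom(Composite):
    def __init__(self, element):
        super().__init__(element)
    def atoms(self):
        yield self

water = Composite("H2O", [Atom("O"), Atom("H"), Atom("H")])
dimer = Composite("water-dimer",
                  [water, Composite("H2O", [Atom("O"), Atom("H"), Atom("H")])])
print([a.name for a in dimer.atoms()])  # six atoms, traversed uniformly
```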
Export
BibTeX
@techreport{BoghossianKohlbacherLenhof1999, TITLE = {{BALL}: Biochemical Algorithms Library}, AUTHOR = {Boghossian, Nicolas and Kohlbacher, Oliver and Lenhof, Hans-Peter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1999-1-002}, NUMBER = {MPI-I-1999-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1999}, DATE = {1999}, ABSTRACT = {In the next century, virtual laboratories will play a key role in biotechnology. Computer experiments will not only replace time-consuming and expensive real-world experiments, but they will also provide insights that cannot be obtained using ``wet'' experiments. The field that deals with the modeling of atoms, molecules, and their reactions is called Molecular Modeling. The advent of Life Sciences gave rise to numerous new developments in this area. However, the implementation of new simulation tools is extremely time-consuming. This is mainly due to the large amount of supporting code ({\eg} for data import/export, visualization, and so on) that is required in addition to the code necessary to implement the new idea. The only way to reduce the development time is to reuse reliable code, preferably using object-oriented approaches. We have designed and implemented {\Ball}, the first object-oriented application framework for rapid prototyping in Molecular Modeling. By the use of the composite design pattern and polymorphism we were able to model the multitude of complex biochemical concepts in a well-structured and comprehensible class hierarchy, the {\Ball} kernel classes. The isomorphism between the biochemical structures and the kernel classes leads to an intuitive interface. Since {\Ball} was designed for rapid software prototyping, ease of use and flexibility were our principal design goals. Besides the kernel classes, {\Ball} provides fundamental components for import/export of data in various file formats, Molecular Mechanics simulations, three-dimensional visualization, and more complex ones like a numerical solver for the Poisson-Boltzmann equation. The usefulness of {\Ball} was shown by the implementation of an algorithm that checks proteins for similarity. Instead of the five months that an earlier implementation took, we were able to implement it within a day using {\Ball}.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Boghossian, Nicolas %A Kohlbacher, Oliver %A Lenhof, Hans-Peter %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T BALL: Biochemical Algorithms Library : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6F98-8 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1999-1-002 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1999 %P 20 p. %X In the next century, virtual laboratories will play a key role in biotechnology. Computer experiments will not only replace time-consuming and expensive real-world experiments, but they will also provide insights that cannot be obtained using ``wet'' experiments. The field that deals with the modeling of atoms, molecules, and their reactions is called Molecular Modeling. The advent of Life Sciences gave rise to numerous new developments in this area. However, the implementation of new simulation tools is extremely time-consuming. This is mainly due to the large amount of supporting code ({\eg} for data import/export, visualization, and so on) that is required in addition to the code necessary to implement the new idea. The only way to reduce the development time is to reuse reliable code, preferably using object-oriented approaches. We have designed and implemented {\Ball}, the first object-oriented application framework for rapid prototyping in Molecular Modeling. By the use of the composite design pattern and polymorphism we were able to model the multitude of complex biochemical concepts in a well-structured and comprehensible class hierarchy, the {\Ball} kernel classes. The isomorphism between the biochemical structures and the kernel classes leads to an intuitive interface. Since {\Ball} was designed for rapid software prototyping, ease of use and flexibility were our principal design goals. Besides the kernel classes, {\Ball} provides fundamental components for import/export of data in various file formats, Molecular Mechanics simulations, three-dimensional visualization, and more complex ones like a numerical solver for the Poisson-Boltzmann equation. The usefulness of {\Ball} was shown by the implementation of an algorithm that checks proteins for similarity. Instead of the five months that an earlier implementation took, we were able to implement it within a day using {\Ball}. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[94]
C. Burnikel, K. Mehlhorn, and M. Seel, “A simple way to recognize a correct Voronoi diagram of line segments,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1999-1-007, 1999.
Abstract
Writing a program for computing the Voronoi diagram of line segments is a complex task. Not only is there an abundance of geometric cases that have to be considered, but the problem is also numerically difficult. It is therefore very easy to make subtle programming errors. In this paper we present a procedure that, for a given set of sites $S$ and a candidate graph $G$, rigorously checks that $G$ is the correct Voronoi diagram of line segments for $S$. Our procedure is particularly efficient and simple to implement.
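A checker in this spirit verifies local conditions of the candidate diagram. The sketch below shows one such condition, simplified to point sites and floating-point tolerance; the report treats the harder case of line segments and the exact arithmetic a rigorous checker needs. The function name and the encoding of defining sites are made up for illustration.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point { double x, y; };

    double dist(Point a, Point b) { return std::hypot(a.x - b.x, a.y - b.y); }

    // Check a Voronoi vertex v that the candidate diagram claims is defined
    // by the sites with indices in `def`: all defining sites must be
    // equidistant from v, and no site may be strictly closer.
    bool vertexOk(Point v, const std::vector<Point>& sites,
                  const std::vector<int>& def, double eps = 1e-9) {
        double r = dist(v, sites[def[0]]);
        for (int i : def)
            if (std::abs(dist(v, sites[i]) - r) > eps) return false;
        for (std::size_t j = 0; j < sites.size(); ++j)
            if (dist(v, sites[j]) < r - eps) return false;
        return true;
    }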
Export
BibTeX
@techreport{BurnikelMehlhornSeel99, TITLE = {A simple way to recognize a correct Voronoi diagram of line segments}, AUTHOR = {Burnikel, Christoph and Mehlhorn, Kurt and Seel, Michael}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1999-1-007}, NUMBER = {MPI-I-1999-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1999}, DATE = {1999}, ABSTRACT = {Writing a program for computing the Voronoi diagram of line segments is a complex task. Not only there is an abundance of geometric cases that have to be considered, but the problem is also numerically difficult. Therefore it is very easy to make subtle programming errors. In this paper we present a procedure that for a given set of sites $S$ and a candidate graph $G$ rigorously checks that $G$ is the correct Voronoi diagram of line segments for $S$. Our procedure is particularly efficient and simple to implement.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Burnikel, Christoph %A Mehlhorn, Kurt %A Seel, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A simple way to recognize a correct Voronoi diagram of line segments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6F7E-4 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1999-1-007 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1999 %P 11 p. %X Writing a program for computing the Voronoi diagram of line segments is a complex task. Not only there is an abundance of geometric cases that have to be considered, but the problem is also numerically difficult. Therefore it is very easy to make subtle programming errors. In this paper we present a procedure that for a given set of sites $S$ and a candidate graph $G$ rigorously checks that $G$ is the correct Voronoi diagram of line segments for $S$. Our procedure is particularly efficient and simple to implement. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[95]
A. Crauser and P. Ferragina, “A theoretical and experimental study on the construction of suffix arrays in external memory,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1999-1-001, 1999.
Abstract
The construction of full-text indexes on very large text collections is nowadays a pressing problem. The suffix array [Manber-Myers,~1993] is one of the most attractive full-text indexing data structures due to its simplicity, space efficiency, and the powerful and fast search operations it supports. In this paper we analyze, both theoretically and experimentally, the I/O-complexity and the working space of six algorithms for constructing large suffix arrays. Three of them are the state of the art; the other three are our new proposals. We perform a set of experiments based on three different data sets (English texts, amino-acid sequences and random texts) and give a precise hierarchy of these algorithms according to their working-space vs. construction-time tradeoff. Given the current trends in model design~\cite{Farach-et-al,Vitter} and disk technology~\cite{dahlin,Ruemmler-Wilkes}, we pay particular attention to differentiating between ``random'' and ``contiguous'' disk accesses, in order to explain some practical I/O phenomena related to the experimental behavior of these algorithms that would otherwise be meaningless in the light of other, simpler external-memory models. To the best of our knowledge, this is the first study that provides a wide spectrum of possible approaches to the construction of suffix arrays in external memory, and it should thus be helpful to anyone interested in building full-text indexes on very large text collections. Finally, we conclude our paper by addressing two other issues. The first concerns the problem of building word-indexes; we show that our results can be successfully applied to this case too, without any loss in efficiency and without compromising the simplicity of programming, so as to achieve a uniform, simple and efficient approach to both indexing models. The second is related to the intriguing and apparently counterintuitive ``contradiction'' between the effective practical performance of the well-known Baeza-Yates-Gonnet-Snider algorithm~\cite{book-info}, verified in our experiments, and its unappealing (i.e., cubic) worst-case behavior. We devise a new external-memory algorithm that follows the basic philosophy underlying that algorithm but in a significantly different manner, thus resulting in a novel approach which combines good worst-case bounds with efficient practical performance.
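For orientation, the data structure itself is simple: the suffix array of a text is the lexicographically sorted array of its suffix start positions. The sketch below is the naive internal-memory construction; the algorithms analyzed in the report exist precisely because this approach breaks down once the text and its index no longer fit in RAM.

    #include <algorithm>
    #include <numeric>
    #include <string>
    #include <vector>

    // Naive construction: sort the start positions by the suffixes they
    // induce. Simple, but O(n^2 log n) character comparisons in the worst
    // case and entirely RAM-bound -- hopeless at external-memory scales.
    std::vector<int> suffixArray(const std::string& s) {
        std::vector<int> sa(s.size());
        std::iota(sa.begin(), sa.end(), 0);
        std::sort(sa.begin(), sa.end(), [&](int a, int b) {
            return s.compare(a, std::string::npos, s, b, std::string::npos) < 0;
        });
        return sa;
    }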
Export
BibTeX
@techreport{CrauserFerragina99, TITLE = {A theoretical and experimental study on the construction of suffix arrays in external memory}, AUTHOR = {Crauser, Andreas and Ferragina, Paolo}, LANGUAGE = {eng}, NUMBER = {MPI-I-1999-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1999}, DATE = {1999}, ABSTRACT = {The construction of full-text indexes on very large text collections is nowadays a hot problem. The suffix array [Manber-Myers,~1993] is one of the most attractive full-text indexing data structures due to its simplicity, space efficiency and powerful/fast search operations supported. In this paper we analyze, both theoretically and experimentally, the I/O-complexity and the working space of six algorithms for constructing large suffix arrays. Three of them are the state-of-the-art, the other three algorithms are our new proposals. We perform a set of experiments based on three different data sets (English texts, Amino-acid sequences and random texts) and give a precise hierarchy of these algorithms according to their working-space vs. construction-time tradeoff. Given the current trends in model design~\cite{Farach-et-al,Vitter} and disk technology~\cite{dahlin,Ruemmler-Wilkes}, we will pose particular attention to differentiate between ``random'' and ``contiguous'' disk accesses, in order to reasonably explain some practical I/O-phenomena which are related to the experimental behavior of these algorithms and that would be otherwise meaningless in the light of other simpler external-memory models. At the best of our knowledge, this is the first study which provides a wide spectrum of possible approaches to the construction of suffix arrays in external memory, and thus it should be helpful to anyone who is interested in building full-text indexes on very large text collections. Finally, we conclude our paper by addressing two other issues. The former concerns with the problem of building word-indexes; we show that our results can be successfully applied to this case too, without any loss in efficiency and without compromising the simplicity of programming so to achieve a uniform, simple and efficient approach to both the two indexing models. The latter issue is related to the intriguing and apparently counterintuitive ``contradiction'' between the effective practical performance of the well-known BaezaYates-Gonnet-Snider's algorithm~\cite{book-info}, verified in our experiments, and its unappealing (i.e., cubic) worst-case behavior. We devise a new external-memory algorithm that follows the basic philosophy underlying that algorithm but in a significantly different manner, thus resulting in a novel approach which combines good worst-case bounds with efficient practical performance.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Crauser, Andreas %A Ferragina, Paolo %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A theoretical and experimental study on the construction of suffix arrays in external memory : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6F9B-2 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1999 %P 40 p. %X The construction of full-text indexes on very large text collections is nowadays a hot problem. The suffix array [Manber-Myers,~1993] is one of the most attractive full-text indexing data structures due to its simplicity, space efficiency and powerful/fast search operations supported. In this paper we analyze, both theoretically and experimentally, the I/O-complexity and the working space of six algorithms for constructing large suffix arrays. Three of them are the state-of-the-art, the other three algorithms are our new proposals. We perform a set of experiments based on three different data sets (English texts, Amino-acid sequences and random texts) and give a precise hierarchy of these algorithms according to their working-space vs. construction-time tradeoff. Given the current trends in model design~\cite{Farach-et-al,Vitter} and disk technology~\cite{dahlin,Ruemmler-Wilkes}, we will pose particular attention to differentiate between ``random'' and ``contiguous'' disk accesses, in order to reasonably explain some practical I/O-phenomena which are related to the experimental behavior of these algorithms and that would be otherwise meaningless in the light of other simpler external-memory models. At the best of our knowledge, this is the first study which provides a wide spectrum of possible approaches to the construction of suffix arrays in external memory, and thus it should be helpful to anyone who is interested in building full-text indexes on very large text collections. Finally, we conclude our paper by addressing two other issues. The former concerns with the problem of building word-indexes; we show that our results can be successfully applied to this case too, without any loss in efficiency and without compromising the simplicity of programming so to achieve a uniform, simple and efficient approach to both the two indexing models. The latter issue is related to the intriguing and apparently counterintuitive ``contradiction'' between the effective practical performance of the well-known BaezaYates-Gonnet-Snider's algorithm~\cite{book-info}, verified in our experiments, and its unappealing (i.e., cubic) worst-case behavior. We devise a new external-memory algorithm that follows the basic philosophy underlying that algorithm but in a significantly different manner, thus resulting in a novel approach which combines good worst-case bounds with efficient practical performance. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[96]
M. Nissen, “Integration of graph iterators into LEDA,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1999-1-006, 1999.
Abstract
This paper explains some implementation details of graph iterators and data accessors in LEDA. It shows how to create new iterators for new graph implementations such that old algorithms can be reused with new graph implementations, as long as they are based on graph iterators and data accessors.
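The two concepts are quickly conveyed in code. Below is a rough C++ sketch (names are illustrative, not LEDA's actual interface): the algorithm template sees only an iterator and a data accessor, so it runs unchanged over any graph representation that models the two concepts.

    #include <iostream>
    #include <vector>

    // The algorithm neither knows the graph representation (hidden behind
    // the iterator) nor where the attribute lives (hidden behind the
    // accessor).
    template <class NodeIt, class DistAccessor>
    void initDistances(NodeIt it, const DistAccessor& dist) {
        for (; it.valid(); ++it) dist.set(*it, 0.0);
    }

    // One possible model: nodes are indices, the attribute lives in an
    // external vector.
    struct IndexNodeIt {
        int cur, last;
        bool valid() const { return cur < last; }
        IndexNodeIt& operator++() { ++cur; return *this; }
        int operator*() const { return cur; }
    };

    struct VectorAccessor {
        std::vector<double>* data;
        void set(int v, double x) const { (*data)[v] = x; }
    };

    int main() {
        std::vector<double> d(5, -1.0);
        initDistances(IndexNodeIt{0, 5}, VectorAccessor{&d});
        std::cout << d[3] << "\n";   // prints 0
    }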
Export
BibTeX
@techreport{MPI-I-1999-1-006, TITLE = {Integration of graph iterators into {LEDA}}, AUTHOR = {Nissen, Marco}, LANGUAGE = {eng}, NUMBER = {MPI-I-1999-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1999}, DATE = {1999}, ABSTRACT = {This paper explains some implementation details of graph iterators and data accessors in LEDA. It shows how to create new iterators for new graph implementations such that old algorithms can be re--used with new graph implementations as long as they are based on graph iterators and data accessors.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Nissen, Marco %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Integration of graph iterators into LEDA : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6F85-1 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1999 %P 39 p. %X This paper explains some implementation details of graph iterators and data accessors in LEDA. It shows how to create new iterators for new graph implementations such that old algorithms can be re--used with new graph implementations as long as they are based on graph iterators and data accessors. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[97]
M. Nissen and K. Weihe, “How generic language extensions enable ‘open-world’ design in Java,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1999-1-004, 1999.
Abstract
By \emph{open-world design} we mean that collaborating classes are so loosely coupled that changes in one class do not propagate to the other classes, and single classes can be isolated and integrated in other contexts. Of course, this is what maintainability and reusability are all about. In the paper, we will demonstrate that in Java even an open-world design of mere attribute access can only be achieved if static safety is sacrificed, and that this conflict is unresolvable \emph{even if the attribute type is fixed}. With generic language extensions such as GJ, which is a generic extension of Java, it is possible to combine static type safety and open-world design. As a consequence, genericity should be viewed as a first-class design feature, because generic language features are preferably applied in many situations in which object-orientedness seems appropriate. We chose Java as the base of the discussion because Java is commonly known and several advanced features of Java aim at a loose coupling of classes. In particular, the paper is intended to make a strong point in favor of generic extensions of Java.
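The paper's examples are in Java and GJ; as a C++ template analogue of the same point (all names below are illustrative), the algorithm names neither the node type nor the place where the attribute is stored, yet every call is statically type checked, so algorithm and clients stay loosely coupled.

    #include <iostream>
    #include <map>
    #include <string>

    template <class Node, class Accessor>
    double weightSum(const Node* nodes, int n, const Accessor& w) {
        double s = 0;
        for (int i = 0; i < n; ++i) s += w.get(nodes[i]);
        return s;
    }

    // Model 1: the attribute is a member field of the node type.
    struct FieldNode { double weight; };
    struct FieldAccessor {
        double get(const FieldNode& v) const { return v.weight; }
    };

    // Model 2: nodes are plain strings, the attribute lives in a map.
    struct MapAccessor {
        const std::map<std::string, double>* m;
        double get(const std::string& v) const { return m->at(v); }
    };

    int main() {
        FieldNode a[2] = {{1.5}, {2.5}};
        std::cout << weightSum(a, 2, FieldAccessor{}) << "\n";   // 4

        std::map<std::string, double> m{{"u", 1.0}, {"v", 3.0}};
        std::string b[2] = {"u", "v"};
        std::cout << weightSum(b, 2, MapAccessor{&m}) << "\n";   // 4
    }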
Export
BibTeX
@techreport{NissenWeihe99, TITLE = {How generic language extensions enable ``open-world'' design in Java}, AUTHOR = {Nissen, Marco and Weihe, Karsten}, LANGUAGE = {eng}, NUMBER = {MPI-I-1999-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1999}, DATE = {1999}, ABSTRACT = {By \emph{open--world design} we mean that collaborating classes are so loosely coupled that changes in one class do not propagate to the other classes, and single classes can be isolated and integrated in other contexts. Of course, this is what maintainability and reusability is all about. In the paper, we will demonstrate that in Java even an open--world design of mere attribute access can only be achieved if static safety is sacrificed, and that this conflict is unresolvable \emph{even if the attribute type is fixed}. With generic language extensions such as GJ, which is a generic extension of Java, it is possible to combine static type safety and open--world design. As a consequence, genericity should be viewed as a first--class design feature, because generic language features are preferably applied in many situations in which object--orientedness seems appropriate. We chose Java as the base of the discussion because Java is commonly known and several advanced features of Java aim at a loose coupling of classes. In particular, the paper is intended to make a strong point in favor of generic extensions of Java.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Nissen, Marco %A Weihe, Karsten %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T How generic language extensions enable ''open-world'' design in Java : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6F8F-D %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1999 %P 40 p. %X By \emph{open--world design} we mean that collaborating classes are so loosely coupled that changes in one class do not propagate to the other classes, and single classes can be isolated and integrated in other contexts. Of course, this is what maintainability and reusability is all about. In the paper, we will demonstrate that in Java even an open--world design of mere attribute access can only be achieved if static safety is sacrificed, and that this conflict is unresolvable \emph{even if the attribute type is fixed}. With generic language extensions such as GJ, which is a generic extension of Java, it is possible to combine static type safety and open--world design. As a consequence, genericity should be viewed as a first--class design feature, because generic language features are preferably applied in many situations in which object--orientedness seems appropriate. We chose Java as the base of the discussion because Java is commonly known and several advanced features of Java aim at a loose coupling of classes. In particular, the paper is intended to make a strong point in favor of generic extensions of Java. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[98]
P. Sanders, S. Egner, and J. Korst, “Fast concurrent access to parallel disks,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1999-1-003, 1999.
Abstract
High performance applications involving large data sets require the efficient and flexible use of multiple disks. In an external memory machine with D parallel, independent disks, only one block can be accessed on each disk in one I/O step. This restriction leads to a load balancing problem that is perhaps the main inhibitor for the efficient adaptation of single-disk external memory algorithms to multiple disks. We show how this problem can be solved efficiently by using randomization and redundancy. A buffer of O(D) blocks suffices to support efficient writing of arbitrary blocks if blocks are distributed uniformly at random to the disks (e.g., by hashing). If two randomly allocated copies of each block exist, N arbitrary blocks can be read within ceiling(N/D)+1 I/O steps with high probability. The redundancy can be further reduced from 2 to 1+1/r for any integer r. From the point of view of external memory models, these results rehabilitate Aggarwal and Vitter's "single-disk multi-head" model that allows access to D arbitrary blocks in each I/O step. This powerful model can be emulated on the physically more realistic independent disk model with small constant overhead factors. Parallel disk external memory algorithms can therefore be developed in the multi-head model first. The emulation result can then be applied directly or further refinements can be added.
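The effect of the redundancy is easy to observe experimentally. The toy program below stores two random copies of each block and fetches each block from the currently less-loaded copy; this greedy rule is only an illustration of why two choices flatten the load, not the report's optimal scheduling, and the parameters are arbitrary.

    #include <algorithm>
    #include <iostream>
    #include <random>
    #include <vector>

    int main() {
        const int N = 100000, D = 16;          // blocks and disks (arbitrary)
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> disk(0, D - 1);
        std::vector<int> load(D, 0);
        for (int i = 0; i < N; ++i) {
            int a = disk(rng), b = disk(rng);  // the block's two copies
            ++load[load[a] <= load[b] ? a : b];
        }
        std::cout << "I/O steps used: "
                  << *std::max_element(load.begin(), load.end())
                  << " (lower bound ceil(N/D) = " << (N + D - 1) / D << ")\n";
    }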
Export
BibTeX
@techreport{SandersEgnerKorst99, TITLE = {Fast concurrent access to parallel disks}, AUTHOR = {Sanders, Peter and Egner, Sebastian and Korst, Jan}, LANGUAGE = {eng}, NUMBER = {MPI-I-1999-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1999}, DATE = {1999}, ABSTRACT = {High performance applications involving large data sets require the efficient and flexible use of multiple disks. In an external memory machine with D parallel, independent disks, only one block can be accessed on each disk in one I/O step. This restriction leads to a load balancing problem that is perhaps the main inhibitor for the efficient adaptation of single-disk external memory algorithms to multiple disks. We show how this problem can be solved efficiently by using randomization and redundancy. A buffer of O(D) blocks suffices to support efficient writing of arbitrary blocks if blocks are distributed uniformly at random to the disks (e.g., by hashing). If two randomly allocated copies of each block exist, N arbitrary blocks can be read within ceiling(N/D)+1 I/O steps with high probability. The redundancy can be further reduced from 2 to 1+1/r for any integer r. From the point of view of external memory models, these results rehabilitate Aggarwal and Vitter's "single-disk multi-head" model that allows access to D arbitrary blocks in each I/O step. This powerful model can be emulated on the physically more realistic independent disk model with small constant overhead factors. Parallel disk external memory algorithms can therefore be developed in the multi-head model first. The emulation result can then be applied directly or further refinements can be added.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Sanders, Peter %A Egner, Sebastian %A Korst, Jan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Fast concurrent access to parallel disks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6F94-0 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1999 %P 30 p. %X High performance applications involving large data sets require the efficient and flexible use of multiple disks. In an external memory machine with D parallel, independent disks, only one block can be accessed on each disk in one I/O step. This restriction leads to a load balancing problem that is perhaps the main inhibitor for the efficient adaptation of single-disk external memory algorithms to multiple disks. We show how this problem can be solved efficiently by using randomization and redundancy. A buffer of O(D) blocks suffices to support efficient writing of arbitrary blocks if blocks are distributed uniformly at random to the disks (e.g., by hashing). If two randomly allocated copies of each block exist, N arbitrary blocks can be read within ceiling(N/D)+1 I/O steps with high probability. The redundancy can be further reduced from 2 to 1+1/r for any integer r. From the point of view of external memory models, these results rehabilitate Aggarwal and Vitter's "single-disk multi-head" model that allows access to D arbitrary blocks in each I/O step. This powerful model can be emulated on the physically more realistic independent disk model with small constant overhead factors. Parallel disk external memory algorithms can therefore be developed in the multi-head model first. The emulation result can then be applied directly or further refinements can be added. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[99]
J. Sibeyn, “Ultimate parallel list ranking?,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1999-1-005, 1999.
Abstract
Two improved list-ranking algorithms are presented. The ``peeling-off'' algorithm leads to an optimal PRAM algorithm, but was designed with application on a real parallel machine in mind. It is simpler than earlier algorithms, and in a range of problem sizes where previously several algorithms were required for the best performance, this single algorithm now suffices. If the problem size is much larger than the number of available processors, then the ``sparse-ruling-sets'' algorithm is even better. In previous versions this algorithm had very restricted practical applicability because of the large number of communication rounds it performed. This main weakness is overcome by adding two new ideas, each of which reduces the number of communication rounds by a factor of two.
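Both algorithms refine the classical pointer-jumping primitive, sketched here sequentially; each pass simulates one synchronous PRAM round, and O(log n) rounds suffice.

    #include <iostream>
    #include <vector>

    // succ[i] is the successor of node i (the tail points to itself).
    // After the loop, rank[i] is the distance from i to the tail.
    std::vector<int> listRanks(std::vector<int> succ) {
        const int n = static_cast<int>(succ.size());
        std::vector<int> rank(n);
        for (int i = 0; i < n; ++i) rank[i] = (succ[i] == i) ? 0 : 1;
        bool done = false;
        while (!done) {
            done = true;
            std::vector<int> r = rank, s = succ;   // read old, write new
            for (int i = 0; i < n; ++i) {
                if (s[i] != s[s[i]]) done = false;
                rank[i] = r[i] + r[s[i]];
                succ[i] = s[s[i]];
            }
        }
        return rank;
    }

    int main() {
        std::vector<int> succ = {1, 2, 3, 3};      // the list 0 -> 1 -> 2 -> 3
        for (int x : listRanks(succ)) std::cout << x << ' ';   // 3 2 1 0
        std::cout << '\n';
    }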
Export
BibTeX
@techreport{Sibeyn1999, TITLE = {Ultimate parallel list ranking?}, AUTHOR = {Sibeyn, Jop}, LANGUAGE = {eng}, NUMBER = {MPI-I-1999-1-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1999}, DATE = {1999}, ABSTRACT = {Two improved list-ranking algorithms are presented. The ``peeling-off'' algorithm leads to an optimal PRAM algorithm, but was designed with application on a real parallel machine in mind. It is simpler than earlier algorithms, and in a range of problem sizes, where previously several algorithms where required for the best performance, now this single algorithm suffices. If the problem size is much larger than the number of available processors, then the ``sparse-ruling-sets'' algorithm is even better. In previous versions this algorithm had very restricted practical application because of the large number of communication rounds it was performing. This main weakness of this algorithm is overcome by adding two new ideas, each of which reduces the number of communication rounds by a factor of two.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Sibeyn, Jop %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Ultimate parallel list ranking? : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-6F8A-8 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1999 %P 20 p. %X Two improved list-ranking algorithms are presented. The ``peeling-off'' algorithm leads to an optimal PRAM algorithm, but was designed with application on a real parallel machine in mind. It is simpler than earlier algorithms, and in a range of problem sizes, where previously several algorithms where required for the best performance, now this single algorithm suffices. If the problem size is much larger than the number of available processors, then the ``sparse-ruling-sets'' algorithm is even better. In previous versions this algorithm had very restricted practical application because of the large number of communication rounds it was performing. This main weakness of this algorithm is overcome by adding two new ideas, each of which reduces the number of communication rounds by a factor of two. %B Research Report / Max-Planck-Institut f&#252;r Informatik
1998
[100]
S. Albers and G. Schmidt, “Scheduling with unexpected machine breakdowns,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-021, 1998.
Abstract
We investigate an online version of the scheduling problem $P, NC|pmtn|C_{\max}$, where a set of jobs has to be scheduled on a number of identical machines so as to minimize the makespan. The job processing times are known in advance and preemption of jobs is allowed. Machines are {\it non-continuously\/} available, i.e., they can break down and recover at arbitrary points in time {\it not known in advance}. New machines may be added as well. Thus machine availabilities change online. We first show that no online algorithm can construct optimal schedules. We also show that no online algorithm can achieve a constant competitive ratio if there may be time intervals where no machine is available. Then we present an online algorithm that constructs schedules with an optimal makespan of $C_{\max}^{OPT}$ if a {\it lookahead\/} of one is given, i.e., the algorithm always knows the next point in time when the set of available machines changes. Finally we give an online algorithm without lookahead that constructs schedules with a nearly optimal makespan of $C_{\max}^{OPT} + \epsilon$, for any $\epsilon >0$, if at any time at least one machine is available. Our results demonstrate that not knowing machine availabilities in advance does little harm.
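For comparison, the offline special case with continuously available machines is solved exactly by McNaughton's classical wrap-around rule, which attains the makespan lower bound $\max(\max_j p_j, \frac{1}{m}\sum_j p_j)$; this optimum is the yardstick for the online guarantees above. A minimal sketch with arbitrary example data:

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<double> p = {5, 3, 3, 2, 2, 1};   // processing times
        int m = 3;                                    // machines
        double C = std::max(*std::max_element(p.begin(), p.end()),
                            std::accumulate(p.begin(), p.end(), 0.0) / m);
        double t = 0; int machine = 0;
        for (std::size_t j = 0; j < p.size(); ++j) {
            double left = p[j];
            while (left > 1e-12) {                    // schedule, wrapping at C
                double piece = std::min(left, C - t);
                std::cout << "job " << j << " on machine " << machine
                          << " during [" << t << ", " << t + piece << ")\n";
                t += piece; left -= piece;
                if (t >= C - 1e-12) { t = 0; ++machine; }
            }
        }
    }

A job split at the wrap runs at the end of one machine and the start of the next; the two pieces cannot overlap in time because every $p_j \le C$.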
Export
BibTeX
@techreport{AlbersSchmidt98, TITLE = {Scheduling with unexpected machine breakdowns}, AUTHOR = {Albers, Susanne and Schmidt, G{\"u}nter}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-021}, NUMBER = {MPI-I-1998-1-021}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We investigate an online version of the scheduling problem $P, NC|pmtn|C_{\max}$, where a set of jobs has to be scheduled on a number of identical machines so as to minimize the makespan. The job processing times are known in advance and preemption of jobs is allowed. Machines are {\it non-continuously\/} available, i.e., they can break down and recover at arbitrary time instances {\it not known in advance}. New machines may be added as well. Thus machine availabilities change online. We first show that no online algorithm can construct optimal schedules. We also show that no online algorithm can achieve a constant competitive ratio if there may be time intervals where no machine is available. Then we present an online algorithm that constructs schedules with an optimal makespan of $C_{\max}^{OPT}$ if a {\it lookahead\/} of one is given, i.e., the algorithm always knows the next point in time when the set of available machines changes. Finally we give an online algorithm without lookahead that constructs schedules with a nearly optimal makespan of $C_{\max}^{OPT} + \epsilon$, for any $\epsilon >0$, if at any time at least one machine is available. Our results demonstrate that not knowing machine availabilities in advance is of little harm.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Albers, Susanne %A Schmidt, G&#252;nter %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Scheduling with unexpected machine breakdowns : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B78-2 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-021 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 15 p. %X We investigate an online version of the scheduling problem $P, NC|pmtn|C_{\max}$, where a set of jobs has to be scheduled on a number of identical machines so as to minimize the makespan. The job processing times are known in advance and preemption of jobs is allowed. Machines are {\it non-continuously\/} available, i.e., they can break down and recover at arbitrary time instances {\it not known in advance}. New machines may be added as well. Thus machine availabilities change online. We first show that no online algorithm can construct optimal schedules. We also show that no online algorithm can achieve a constant competitive ratio if there may be time intervals where no machine is available. Then we present an online algorithm that constructs schedules with an optimal makespan of $C_{\max}^{OPT}$ if a {\it lookahead\/} of one is given, i.e., the algorithm always knows the next point in time when the set of available machines changes. Finally we give an online algorithm without lookahead that constructs schedules with a nearly optimal makespan of $C_{\max}^{OPT} + \epsilon$, for any $\epsilon >0$, if at any time at least one machine is available. Our results demonstrate that not knowing machine availabilities in advance is of little harm. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[101]
G. S. Brodal and M. C. Pinotti, “Comparator networks for binary heap construction,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-002, 1998.
Abstract
Comparator networks for constructing binary heaps of size $n$ are presented which have size $O(n\log\log n)$ and depth $O(\log n)$. A lower bound of $n\log\log n-O(n)$ for the size of any heap construction network is also proven, implying that the networks presented are within a constant factor of optimal. We give a tight relation between the leading constants in the size of selection networks and in the size of heap construction networks.
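A comparator network is a fixed, data-oblivious sequence of compare-exchange operations. The sketch below runs a two-comparator network for heaps of size 3 on all 0-1 inputs; testing binary inputs only is justified by a 0-1 principle analogous to the one for sorting networks, which this line of work relies on.

    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    using Net = std::vector<std::pair<int, int>>;

    void run(const Net& net, std::vector<int>& a) {
        for (auto [i, j] : net)
            if (a[i] > a[j]) std::swap(a[i], a[j]);   // min goes to position i
    }

    bool isMinHeap(const std::vector<int>& a) {
        for (std::size_t i = 1; i < a.size(); ++i)
            if (a[(i - 1) / 2] > a[i]) return false;  // parent must not exceed child
        return true;
    }

    int main() {
        Net net = {{0, 1}, {0, 2}};        // routes the minimum to the root
        bool ok = true;
        for (int mask = 0; mask < 8; ++mask) {
            std::vector<int> a = {mask & 1, (mask >> 1) & 1, (mask >> 2) & 1};
            run(net, a);
            ok = ok && isMinHeap(a);
        }
        std::cout << (ok ? "valid heap network\n" : "counterexample found\n");
    }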
Export
BibTeX
@techreport{BrodalPinotti98, TITLE = {Comparator networks for binary heap construction}, AUTHOR = {Brodal, Gerth St{\o}lting and Pinotti, M. Cristina}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {Comparator networks for constructing binary heaps of size $n$ are presented which have size $O(n\log\log n)$ and depth $O(\log n)$. A lower bound of $n\log\log n-O(n)$ for the size of any heap construction network is also proven, implying that the networks presented are within a constant factor of optimal. We give a tight relation between the leading constants in the size of selection networks and in the size of heap constructiion networks.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Brodal, Gerth St&#248;lting %A Pinotti, M. Cristina %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Comparator networks for binary heap construction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9A0B-B %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 11 p. %X Comparator networks for constructing binary heaps of size $n$ are presented which have size $O(n\log\log n)$ and depth $O(\log n)$. A lower bound of $n\log\log n-O(n)$ for the size of any heap construction network is also proven, implying that the networks presented are within a constant factor of optimal. We give a tight relation between the leading constants in the size of selection networks and in the size of heap constructiion networks. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[102]
H. Brönniman, L. Kettner, S. Schirra, and R. Veltkamp, “Applications of the generic programming paradigm in the design of CGAL,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-030, 1998.
Abstract
We report on the use of the generic programming paradigm in the computational geometry algorithms library CGAL. The parameterization of the geometric algorithms in CGAL enhances flexibility and adaptability and opens an easy way to eliminate precision and robustness problems through exact, but nevertheless efficient, computation. Furthermore, we discuss circulators, which are an extension of the iterator concept to circular structures. Such structures arise frequently in geometric computing.
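The circulator concept fits in a few lines. Below is a stripped-down sketch (CGAL's actual circulators carry more machinery, such as categories and distances): there is no past-the-end position, so a full traversal starts anywhere and stops upon returning to the start.

    #include <iostream>

    struct Node { int value; Node* next; };

    struct Circulator {
        Node* pos;
        Circulator& operator++() { pos = pos->next; return *this; }
        int operator*() const { return pos->value; }
        bool operator==(const Circulator& o) const { return pos == o.pos; }
    };

    int main() {
        Node c = {3, nullptr}, b = {2, &c}, a = {1, &b};
        c.next = &a;                       // close the ring a -> b -> c -> a
        Circulator start{&a}, cur{&a};
        do {
            std::cout << *cur << ' ';      // prints 1 2 3
            ++cur;
        } while (!(cur == start));
        std::cout << '\n';
    }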
Export
BibTeX
@techreport{BronnimanKettnerSchirraVeltkamp98, TITLE = {Applications of the generic programming paradigm in the design of {CGAL}}, AUTHOR = {Br{\"o}nniman, Herv{\`e} and Kettner, Lutz and Schirra, Stefan and Veltkamp, Remco}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-030}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We report on the use of the generic programming paradigm in the computational geometry algorithms library CGAL. The parameterization of the geometric algorithms in CGAL enhances flexibility and adaptability and opens an easy way for abolishing precision and robustness problems by exact but nevertheless efficient computation. Furthermore we discuss circulators, which are an extension of the iterator concept to circular structures. Such structures arise frequently in geometric computing.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Br&#246;nniman, Herv&#232; %A Kettner, Lutz %A Schirra, Stefan %A Veltkamp, Remco %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Applications of the generic programming paradigm in the design of CGAL : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B5D-F %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 12 p. %X We report on the use of the generic programming paradigm in the computational geometry algorithms library CGAL. The parameterization of the geometric algorithms in CGAL enhances flexibility and adaptability and opens an easy way for abolishing precision and robustness problems by exact but nevertheless efficient computation. Furthermore we discuss circulators, which are an extension of the iterator concept to circular structures. Such structures arise frequently in geometric computing. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[103]
S. Burkhardt, A. Crauser, P. Ferragina, H.-P. Lenhof, E. Rivals, and M. Vingron, “$q$-gram based database searching using a suffix array (QUASAR),” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-024, 1998.
Abstract
With the increasing amount of DNA sequence information deposited in our databases, searching for similarity to a query sequence has become a basic operation in molecular biology. But even today's fast algorithms reach their limits when applied to all-versus-all comparisons of large databases. Here we present a new database searching algorithm dubbed QUASAR (Q-gram Alignment based on Suffix ARrays), which was designed to quickly detect sequences with strong similarity to the query in a context where many searches are conducted on one database. Our algorithm applies a modification of $q$-tuple filtering implemented on top of a suffix array. Two versions were developed, one for a RAM-resident suffix array and one for access to the suffix array on disk. We compared our implementation with BLAST and found that our approach is an order of magnitude faster. It is, however, restricted to the search for strongly similar DNA sequences as is typically required, e.g., in the context of clustering expressed sequence tags (ESTs).
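The filtration idea underneath QUASAR: sequences within small edit distance necessarily share many length-$q$ substrings, so windows sharing too few $q$-grams with the query can be discarded without alignment. The sketch below counts shared $q$-grams with a hash set; QUASAR locates them through the suffix array instead, which is what makes the disk-based variant possible.

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <unordered_set>

    int sharedQGrams(const std::string& query, const std::string& window, int q) {
        std::unordered_set<std::string> grams;
        for (std::size_t i = 0; i + q <= query.size(); ++i)
            grams.insert(query.substr(i, q));          // index the query's q-grams
        int hits = 0;
        for (std::size_t i = 0; i + q <= window.size(); ++i)
            hits += grams.count(window.substr(i, q));  // count matches in the window
        return hits;
    }

    int main() {
        std::cout << sharedQGrams("ACGTACGT", "ACGTTCGT", 3) << "\n";  // prints 3
    }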
Export
BibTeX
@techreport{BurkhardtCrauserFerraginaLenhofRivalsVingron98, TITLE = {\$q\$-gram based database searching using a suffix array ({QUASAR})}, AUTHOR = {Burkhardt, Stefan and Crauser, Andreas and Ferragina, Paolo and Lenhof, Hans-Peter and Rivals, Eric and Vingron, Martin}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-024}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {With the increasing amount of DNA sequence information deposited in our databases searching for similarity to a query sequence has become a basic operation in molecular biology. But even today's fast algorithms reach their limits when applied to all-versus-all comparisons of large databases. Here we present a new data base searching algorithm dubbed QUASAR (Q-gram Alignment based on Suffix ARrays) which was designed to quickly detect sequences with strong similarity to the query in a context where many searches are conducted on one database. Our algorithm applies a modification of $q$-tuple filtering implemented on top of a suffix array. Two versions were developed, one for a RAM resident suffix array and one for access to the suffix array on disk. We compared our implementation with BLAST and found that our approach is an order of magnitude faster. It is, however, restricted to the search for strongly similar DNA sequences as is typically required, e.g., in the context of clustering expressed sequence tags (ESTs).}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Burkhardt, Stefan %A Crauser, Andreas %A Ferragina, Paolo %A Lenhof, Hans-Peter %A Rivals, Eric %A Vingron, Martin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Gene regulation (Martin Vingron), Dept. of Computational Molecular Biology (Head: Martin Vingron), Max Planck Institute for Molecular Genetics, Max Planck Society %T $q$-gram based database searching using a suffix array (QUASAR) : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B6F-7 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 11 p. %X With the increasing amount of DNA sequence information deposited in our databases searching for similarity to a query sequence has become a basic operation in molecular biology. But even today's fast algorithms reach their limits when applied to all-versus-all comparisons of large databases. Here we present a new data base searching algorithm dubbed QUASAR (Q-gram Alignment based on Suffix ARrays) which was designed to quickly detect sequences with strong similarity to the query in a context where many searches are conducted on one database. Our algorithm applies a modification of $q$-tuple filtering implemented on top of a suffix array. Two versions were developed, one for a RAM resident suffix array and one for access to the suffix array on disk. We compared our implementation with BLAST and found that our approach is an order of magnitude faster. It is, however, restricted to the search for strongly similar DNA sequences as is typically required, e.g., in the context of clustering expressed sequence tags (ESTs). %B Research Report / Max-Planck-Institut f&#252;r Informatik
[104]
C. Burnikel, “Rational points on circles,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-023, 1998.
Abstract
We solve the following problem. For a given rational circle $C$ passing through the rational points $p$, $q$, $r$ and a given angle $\alpha$, compute a rational point on $C$ whose angle at $C$ differs from $\alpha$ by a value of at most $\epsilon$. In addition, try to minimize the bit length of the computed point. This document contains the C++ program |rational_points_on_circle.c|. We use the literate programming tool |noweb| by Norman Ramsey.
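The number-theoretic tool behind such constructions is the tangent half-angle parametrization: for rational $t$, the point $((1-t^2)/(1+t^2),\ 2t/(1+t^2))$ lies on the unit circle and has angle $2\arctan t$. The sketch below picks $t$ as a dyadic rational near $\tan(\alpha/2)$; doubles stand in for the exact rational arithmetic of the report, and no bit-length minimization is attempted.

    #include <cmath>
    #include <iostream>

    int main() {
        double alpha = 1.0;                    // target angle in radians
        // Rational parameter t = k/1024 approximating tan(alpha/2).
        double t = std::round(std::tan(alpha / 2) * 1024) / 1024;
        double x = (1 - t * t) / (1 + t * t);  // rational point on the unit circle
        double y = 2 * t / (1 + t * t);
        std::cout << "point (" << x << ", " << y << "), angle "
                  << std::atan2(y, x) << " vs target " << alpha << "\n";
    }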
Export
BibTeX
@techreport{Burnikel98-1-023, TITLE = {Rational points on circles}, AUTHOR = {Burnikel, Christoph}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-023}, NUMBER = {MPI-I-1998-1-023}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We solve the following problem. For a given rational circle $C$ passing through the rational points $p$, $q$, $r$ and a given angle $\alpha$, compute a rational point on $C$ whose angle at $C$ differs from $\alpha$ by a value of at most $\epsilon$. In addition, try to minimize the bit length of the computed point. This document contains the C++ program |rational_points_on_circle.c|. We use the literate programming tool |noweb| by Norman Ramsey.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Burnikel, Christoph %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Rational points on circles : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B72-E %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-023 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 14 p. %X We solve the following problem. For a given rational circle $C$ passing through the rational points $p$, $q$, $r$ and a given angle $\alpha$, compute a rational point on $C$ whose angle at $C$ differs from $\alpha$ by a value of at most $\epsilon$. In addition, try to minimize the bit length of the computed point. This document contains the C++ program |rational_points_on_circle.c|. We use the literate programming tool |noweb| by Norman Ramsey. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[105]
C. Burnikel, “Delaunay graphs by divide and conquer,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-027, 1998.
Abstract
This document describes the LEDA program dc_delaunay.c for computing Delaunay graphs by the divide-and-conquer method. The program can be used either with exact primitives or with non-exact primitives. It handles all cases of degeneracy and is relatively robust against the use of imprecise arithmetic. We use the literate programming tool noweb by Norman Ramsey.
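The predicate at the heart of any Delaunay program, and the place where exact and non-exact primitives differ, is the incircle test: for $a,b,c$ in counterclockwise order, point $d$ lies inside their circumcircle iff a lifted $3\times 3$ determinant is positive. A plain floating-point version, whose rounding errors the exact primitives avoid:

    #include <iostream>

    double incircle(double ax, double ay, double bx, double by,
                    double cx, double cy, double dx, double dy) {
        // Translate d to the origin and lift onto the paraboloid z = x^2 + y^2.
        double a1 = ax - dx, a2 = ay - dy, a3 = a1 * a1 + a2 * a2;
        double b1 = bx - dx, b2 = by - dy, b3 = b1 * b1 + b2 * b2;
        double c1 = cx - dx, c2 = cy - dy, c3 = c1 * c1 + c2 * c2;
        return a1 * (b2 * c3 - b3 * c2)
             - a2 * (b1 * c3 - b3 * c1)
             + a3 * (b1 * c2 - b2 * c1);
    }

    int main() {
        // Is the origin inside the circle through (1,0), (0,1), (-1,0)?
        std::cout << incircle(1, 0, 0, 1, -1, 0, 0, 0) << "\n";  // positive: inside
    }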
Export
BibTeX
@techreport{Burnikel98-1-027, TITLE = {Delaunay graphs by divide and conquer}, AUTHOR = {Burnikel, Christoph}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-027}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {This document describes the LEDA program dc_delaunay.c for computing Delaunay graphs by the divide-and-conquer method. The program can be used either with exact primitives or with non-exact primitives. It handles all cases of degeneracy and is relatively robust against the use of imprecise arithmetic. We use the literate programming tool noweb by Norman Ramsey.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Burnikel, Christoph %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Delaunay graphs by divide and conquer : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B60-5 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 24 p. %X This document describes the LEDA program dc_delaunay.c for computing Delaunay graphs by the divide-and-conquer method. The program can be used either with exact primitives or with non-exact primitives. It handles all cases of degeneracy and is relatively robust against the use of imprecise arithmetic. We use the literate programming tool noweb by Norman Ramsey. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[106]
C. Burnikel and J. Ziegler, “Fast recursive division,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-022, 1998.
Abstract
We present a new recursive method for division with remainder of integers. Its running time is $2K(n)+O(n \log n)$ for division of a $2n$-digit number by an $n$-digit number, where $K(n)$ is the Karatsuba multiplication time. It pays off in practice for numbers with 860 bits or more. Then we show how we can lower this bound to $3/2 K(n)+O(n\log n)$ if we are not interested in the remainder. As an application of division with remainder we show how to speed up modular multiplication. We also give practical results of an implementation that allow us to say that we have the fastest integer division on a SPARC architecture compared to all other integer packages we know of.
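To see where a bound of the form $\alpha\,K(n) + O(n\log n)$ comes from, note that dividing a $2n$-digit number by an $n$-digit number reduces to two half-size divisions plus Karatsuba multiplications; this is a sketch of the recursion's shape, not the report's exact accounting:

    \[
      D(n) \;=\; 2\,D(n/2) + O(K(n/2)) + O(n),
      \qquad K(n) = \Theta\bigl(n^{\log_2 3}\bigr).
    \]
    Since $2\,K(n/2) = \tfrac{2}{3}\,K(n)$, the multiplication costs decrease
    geometrically over the recursion levels, so
    \[
      D(n) \;=\; O\Bigl(K(n)\sum_{i\ge 0}(2/3)^i\Bigr) + O(n\log n)
            \;=\; O(K(n)) + O(n\log n).
    \]
    Pinning the constant down to $2$ (or $3/2$ when the remainder is not needed)
    is the content of the report.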
Export
BibTeX
@techreport{BurnikelZiegler98, TITLE = {Fast recursive division}, AUTHOR = {Burnikel, Christoph and Ziegler, Joachim}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-022}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We present a new recursive method for division with remainder of integers. Its running time is $2K(n)+O(n \log n)$ for division of a $2n$-digit number by an $n$-digit number where $K(n)$ is the Karatsuba multiplication time. It pays in p ractice for numbers with 860 bits or more. Then we show how we can lower this bo und to $3/2 K(n)+O(n\log n)$ if we are not interested in the remainder. As an application of division with remainder we show how to speedup modular multiplication. We also give practical results of an implementation that allow u s to say that we have the fastest integer division on a SPARC architecture compa red to all other integer packages we know of.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Burnikel, Christoph %A Ziegler, Joachim %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fast recursive division : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B75-8 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 29 p. %X We present a new recursive method for division with remainder of integers. Its running time is $2K(n)+O(n \log n)$ for division of a $2n$-digit number by an $n$-digit number where $K(n)$ is the Karatsuba multiplication time. It pays in p ractice for numbers with 860 bits or more. Then we show how we can lower this bo und to $3/2 K(n)+O(n\log n)$ if we are not interested in the remainder. As an application of division with remainder we show how to speedup modular multiplication. We also give practical results of an implementation that allow u s to say that we have the fastest integer division on a SPARC architecture compa red to all other integer packages we know of. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[107]
A. Crauser, P. Ferragina, K. Mehlhorn, U. Meyer, and E. A. Ramos, “Randomized external-memory algorithms for some geometric problems,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-017, 1998.
Abstract
We show that the well-known random incremental construction of Clarkson and Shor can be adapted via {\it gradations} to provide efficient external-memory algorithms for some geometric problems. In particular, as the main result, we obtain an optimal randomized algorithm for the problem of computing the trapezoidal decomposition determined by a set of $N$ line segments in the plane with $K$ pairwise intersections, which requires $\Theta(\frac{N}{B} \log_{M/B} \frac{N}{B} +\frac{K}{B})$ expected disk accesses, where $M$ is the size of the available internal memory and $B$ is the size of the block transfer. The approach is sufficiently general to also yield algorithms for the problems of 3-d half-space intersections, 2-d and 3-d convex hulls, 2-d abstract Voronoi diagrams and batched planar point location, which require an optimal expected number of disk accesses and are simpler than the ones previously known. The results extend to an external-memory model with multiple disks. Additionally, under reasonable conditions on the parameters $N,M,B$, these results can be notably simplified, yielding practical algorithms which still achieve optimal expected bounds.
Export
BibTeX
@techreport{CrauserFerraginaMehlhornMeyerRamos98, TITLE = {Randomized external-memory algorithms for some geometric problems}, AUTHOR = {Crauser, Andreas and Ferragina, Paolo and Mehlhorn, Kurt and Meyer, Ulrich and Ramos, Edgar A.}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-017}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We show that the well-known random incremental construction of Clarkson and Shor can be adapted via {\it gradations} to provide efficient external-memory algorithms for some geometric problems. In particular, as the main result, we obtain an optimal randomized algorithm for the problem of computing the trapezoidal decomposition determined by a set of $N$ line segments in the plane with $K$ pairwise intersections, that requires $\Theta(\frac{N}{B} \log_{M/B} \frac{N}{B} +\frac{K}{B})$ expected disk accesses, where $M$ is the size of the available internal memory and $B$ is the size of the block transfer. The approach is sufficiently general to obtain algorithms also for the problems of 3-d half-space intersections, 2-d and 3-d convex hulls, 2-d abstract Voronoi diagrams and batched planar point location, which require an optimal expected number of disk accesses and are simpler than the ones previously known. The results extend to an external-memory model with multiple disks. Additionally, under reasonable conditions on the parameters $N,M,B$, these results can be notably simplified originating practical algorithms which still achieve optimal expected bounds.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Crauser, Andreas %A Ferragina, Paolo %A Mehlhorn, Kurt %A Meyer, Ulrich %A Ramos, Edgar A. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Randomized external-memory algorithms for some geometric problems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BBB-C %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 27 p. %X We show that the well-known random incremental construction of Clarkson and Shor can be adapted via {\it gradations} to provide efficient external-memory algorithms for some geometric problems. In particular, as the main result, we obtain an optimal randomized algorithm for the problem of computing the trapezoidal decomposition determined by a set of $N$ line segments in the plane with $K$ pairwise intersections, that requires $\Theta(\frac{N}{B} \log_{M/B} \frac{N}{B} +\frac{K}{B})$ expected disk accesses, where $M$ is the size of the available internal memory and $B$ is the size of the block transfer. The approach is sufficiently general to obtain algorithms also for the problems of 3-d half-space intersections, 2-d and 3-d convex hulls, 2-d abstract Voronoi diagrams and batched planar point location, which require an optimal expected number of disk accesses and are simpler than the ones previously known. The results extend to an external-memory model with multiple disks. Additionally, under reasonable conditions on the parameters $N,M,B$, these results can be notably simplified originating practical algorithms which still achieve optimal expected bounds. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[108]
A. Crauser, K. Mehlhorn, E. Althaus, K. Brengel, T. Buchheit, J. Keller, H. Krone, O. Lambert, R. Schulte, S. Thiel, M. Westphal, and R. Wirth, “On the performance of LEDA-SM,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-028, 1998.
Abstract
We report on the performance of a library prototype for external memory algorithms and data structures called LEDA-SM, where SM is an acronym for secondary memory. Our library is based on LEDA and intended to complement it for large data. We present performance results of our external memory library prototype and compare these results with corresponding results of LEDA's in-core algorithms in virtual memory. The results show that even if only a small main memory is used for the external memory algorithms, they always outperform their in-core counterparts. Furthermore, we compare different implementations of external memory data structures and algorithms.
Export
BibTeX
@techreport{CrauserMehlhornAlthausetal98, TITLE = {On the performance of {LEDA}-{SM}}, AUTHOR = {Crauser, Andreas and Mehlhorn, Kurt and Althaus, Ernst and Brengel, Klaus and Buchheit, Thomas and Keller, J{\"o}rg and Krone, Henning and Lambert, Oliver and Schulte, Ralph and Thiel, Sven and Westphal, Mark and Wirth, Robert}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-028}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We report on the performance of a library prototype for external memory algorithms and data structures called LEDA-SM, where SM is an acronym for secondary memory. Our library is based on LEDA and intended to complement it for large data. We present performance results of our external memory library prototype and compare these results with corresponding results of LEDAs in-core algorithms in virtual memory. The results show that even if only a small main memory is used for the external memory algorithms, they always outperform their in-core counterpart. Furthermore we compare different implementations of external memory data structures and algorithms.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Crauser, Andreas %A Mehlhorn, Kurt %A Althaus, Ernst %A Brengel, Klaus %A Buchheit, Thomas %A Keller, Jörg %A Krone, Henning %A Lambert, Oliver %A Schulte, Ralph %A Thiel, Sven %A Westphal, Mark %A Wirth, Robert %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the performance of LEDA-SM : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B63-0 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 26 p. %X We report on the performance of a library prototype for external memory algorithms and data structures called LEDA-SM, where SM is an acronym for secondary memory. Our library is based on LEDA and intended to complement it for large data. We present performance results of our external memory library prototype and compare these results with corresponding results of LEDA's in-core algorithms in virtual memory. The results show that even if only a small main memory is used for the external memory algorithms, they always outperform their in-core counterparts. Furthermore, we compare different implementations of external memory data structures and algorithms. %B Research Report / Max-Planck-Institut für Informatik
[109]
D. Dubhashi and D. Ranjan, “On positive influence and negative dependence,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-018, 1998.
Abstract
We study two notions of negative influence, namely negative regression and negative association. We show that if a set of symmetric binary random variables is negatively regressed, then it is necessarily negatively associated. The proof uses a lemma that is of independent interest and shows that every binary symmetric distribution has a variable of ``positive influence''. We also show that in general the notion of negative regression is different from that of negative association.
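For orientation, the two notions can be stated as follows. This is the standard formalization from the negative-dependence literature; the report's exact definitions may differ in minor details. Random variables $X_1,\ldots,X_n$ are negatively associated if for all disjoint index sets $I, J \subseteq \{1,\ldots,n\}$ and all functions $f, g$ that are nondecreasing in each coordinate, $E[f(X_i, i \in I) \cdot g(X_j, j \in J)] \le E[f(X_i, i \in I)] \cdot E[g(X_j, j \in J)]$. They satisfy negative regression if $E[f(X_i, i \in I) \mid X_j = t_j, j \in J]$ is nonincreasing in each $t_j$ for all such disjoint $I, J$ and every nondecreasing $f$. The report's first result is that for symmetric binary variables the second condition implies the first.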
Export
BibTeX
@techreport{DubhashiRanjan98, TITLE = {On positive influence and negative dependence}, AUTHOR = {Dubhashi, Devdatt and Ranjan, Desh}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-018}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We study two notions of negative influence, namely negative regression and negative association. We show that if a set of symmetric binary random variables is negatively regressed, then it is necessarily negatively associated. The proof uses a lemma that is of independent interest and shows that every binary symmetric distribution has a variable of ``positive influence''. We also show that in general the notion of negative regression is different from that of negative association.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Dubhashi, Devdatt %A Ranjan, Desh %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On positive influence and negative dependence : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BAC-E %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 12 p. %X We study two notions of negative influence, namely negative regression and negative association. We show that if a set of symmetric binary random variables is negatively regressed, then it is necessarily negatively associated. The proof uses a lemma that is of independent interest and shows that every binary symmetric distribution has a variable of ``positive influence''. We also show that in general the notion of negative regression is different from that of negative association. %B Research Report / Max-Planck-Institut für Informatik
[110]
A. Fabri, G.-J. Giezeman, L. Kettner, S. Schirra, and S. Schönherr, “On the Design of CGAL, the Computational Geometry Algorithms Library,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-007, 1998.
Abstract
CGAL is a Computational Geometry Algorithms Library written in C++, which is developed in an ESPRIT LTR project. The goal is to make the large body of geometric algorithms developed in the field of computational geometry available for industrial application. In this chapter we discuss the major design goals for CGAL, which are correctness, flexibility, ease-of-use, efficiency, and robustness, and present our approach to reach these goals. Templates and the relatively new paradigm of generic programming play a central role in the architecture of CGAL. We give a short introduction to generic programming in C++, compare it to the object-oriented programming paradigm, and present examples where both paradigms are used effectively in CGAL. Moreover, we give an overview of the current structure of the library and consider software engineering aspects in the CGAL project.
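The contrast between the two paradigms can be made concrete in a few lines of C++. The sketch below is a toy example and not CGAL code: the object-oriented version fixes one point representation behind a virtual interface and pays an indirect call per access, while the generic version takes the point type as a template parameter and resolves all calls at compile time.

#include <iostream>
#include <vector>

// Object-oriented style: algorithms see points only through a virtual base.
struct PointBase {
    virtual double x() const = 0;
    virtual ~PointBase() = default;
};
struct CartesianPoint : PointBase {
    double xc;
    explicit CartesianPoint(double x) : xc(x) {}
    double x() const override { return xc; }
};

// Generic style: any type with an x() member works; calls can be inlined.
template <typename Point>
double leftmost_x(const std::vector<Point>& pts) {
    double best = pts.front().x();
    for (const Point& p : pts)
        if (p.x() < best) best = p.x();
    return best;
}

int main() {
    std::vector<CartesianPoint> v{CartesianPoint(2), CartesianPoint(-1)};
    const PointBase& b = v[0];                            // dynamic dispatch
    std::cout << b.x() << ' ' << leftmost_x(v) << '\n';   // prints 2 -1
}

In the generic version no common base class is needed; the requirements on Point are purely syntactic, which is the combination of flexibility and efficiency argued for in the abstract.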
Export
BibTeX
@techreport{FabriGiezemanKettnerSchirraSch'onherr, TITLE = {On the Design of {CGAL}, the Computational Geometry Algorithms Library}, AUTHOR = {Fabri, Andreas and Giezeman, Geert-Jan and Kettner, Lutz and Schirra, Stefan and Sch{\"o}nherr, Sven}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {CGAL is a Computational Geometry Algorithms Library written in C++, which is developed in an ESPRIT LTR project. The goal is to make the large body of geometric algorithms developed in the field of computational geometry available for industrial application. In this chapter we discuss the major design goals for CGAL, which are correctness, flexibility, ease-of-use, efficiency, and robustness, and present our approach to reach these goals. Templates and the relatively new paradigm of generic programming play a central role in the architecture of CGAL. We give a short introduction to generic programming in C++, compare it to the object-oriented programming paradigm, and present examples where both paradigms are used effectively in CGAL. Moreover, we give an overview of the current structure of the library and consider software engineering aspects in the CGAL project.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Fabri, Andreas %A Giezeman, Geert-Jan %A Kettner, Lutz %A Schirra, Stefan %A Schönherr, Sven %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On the Design of CGAL, the Computational Geometry Algorithms Library : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BDF-D %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 31 p. %X CGAL is a Computational Geometry Algorithms Library written in C++, which is developed in an ESPRIT LTR project. The goal is to make the large body of geometric algorithms developed in the field of computational geometry available for industrial application. In this chapter we discuss the major design goals for CGAL, which are correctness, flexibility, ease-of-use, efficiency, and robustness, and present our approach to reach these goals. Templates and the relatively new paradigm of generic programming play a central role in the architecture of CGAL. We give a short introduction to generic programming in C++, compare it to the object-oriented programming paradigm, and present examples where both paradigms are used effectively in CGAL. Moreover, we give an overview of the current structure of the library and consider software engineering aspects in the CGAL project. %B Research Report / Max-Planck-Institut für Informatik
[111]
G. N. Frederickson and R. Solis-Oba, “Robustness analysis in combinatorial optimization,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-011, 1998.
Abstract
The robustness function of an optimization problem measures the maximum change in the value of its optimal solution that can be produced by changes of a given total magnitude on the values of the elements in its input. The problem of computing the robustness function of matroid optimization problems is studied under two cost models: the discrete model, which allows the removal of elements from the input, and the continuous model, which permits finite changes on the values of the elements in the input. For the discrete model, an $O(\log k)$-approximation algorithm is presented for computing the robustness function of minimum spanning trees, where $k$ is the number of edges to be removed. The algorithm uses as a key subroutine a 2-approximation algorithm for the problem of dividing a graph into the maximum number of components by removing $k$ edges from it. For the continuous model, a number of results are presented. First, a general algorithm is given for computing the robustness function of any matroid. The algorithm runs in strongly polynomial time on matroids with a strongly polynomial time independence test. Faster algorithms are also presented for some particular classes of matroids: (1) an $O(n^3m^2 \log (n^2/m))$-time algorithm for graphic matroids, where $m$ is the number of elements in the matroid and $n$ is its rank, (2) an $O(mn(m+n^2)|E|\log(m^2/|E|+2))$-time algorithm for transversal matroids, where $|E|$ is a parameter of the matroid, (3) an $O(m^2n^2)$-time algorithm for scheduling matroids, and (4) an $O(m \log m)$-time algorithm for partition matroids. For this last class of matroids an optimal algorithm is also presented for evaluating the robustness function at a single point.
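As a concrete reading of the discrete model for minimum spanning trees (notation ours, not necessarily the report's), the robustness function can be written as $R(k) = \max_{F \subseteq E, |F| \le k} (\mathrm{mst}(G-F) - \mathrm{mst}(G))$, the largest increase in minimum-spanning-tree weight that can be forced by deleting at most $k$ edges, with $\mathrm{mst}(G-F) = \infty$ when $G-F$ is disconnected. This also suggests, intuitively, why the graph-partitioning subroutine appears: deleting $k$ edges that break the current tree into many components forces many replacement edges into the new spanning tree.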
Export
BibTeX
@techreport{FredericksonSolis-Oba98, TITLE = {Robustness analysis in combinatorial optimization}, AUTHOR = {Frederickson, Greg N. and Solis-Oba, Roberto}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-011}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {The robustness function of an optimization problem measures the maximum change in the value of its optimal solution that can be produced by changes of a given total magnitude on the values of the elements in its input. The problem of computing the robustness function of matroid optimization problems is studied under two cost models: the discrete model, which allows the removal of elements from the input, and the continuous model, which permits finite changes on the values of the elements in the input. For the discrete model, an $O(\log k)$-approximation algorithm is presented for computing the robustness function of minimum spanning trees, where $k$ is the number of edges to be removed. The algorithm uses as a key subroutine a 2-approximation algorithm for the problem of dividing a graph into the maximum number of components by removing $k$ edges from it. For the continuous model, a number of results are presented. First, a general algorithm is given for computing the robustness function of any matroid. The algorithm runs in strongly polynomial time on matroids with a strongly polynomial time independence test. Faster algorithms are also presented for some particular classes of matroids: (1) an $O(n^3m^2 \log (n^2/m))$-time algorithm for graphic matroids, where $m$ is the number of elements in the matroid and $n$ is its rank, (2) an $O(mn(m+n^2)|E|\log(m^2/|E|+2))$-time algorithm for transversal matroids, where $|E|$ is a parameter of the matroid, (3) an $O(m^2n^2)$-time algorithm for scheduling matroids, and (4) an $O(m \log m)$-time algorithm for partition matroids. For this last class of matroids an optimal algorithm is also presented for evaluating the robustness function at a single point.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Frederickson, Greg N. %A Solis-Oba, Roberto %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Robustness analysis in combinatorial optimization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BD3-5 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 66 p. %X The robustness function of an optimization problem measures the maximum change in the value of its optimal solution that can be produced by changes of a given total magnitude on the values of the elements in its input. The problem of computing the robustness function of matroid optimization problems is studied under two cost models: the discrete model, which allows the removal of elements from the input, and the continuous model, which permits finite changes on the values of the elements in the input. For the discrete model, an $O(\log k)$-approximation algorithm is presented for computing the robustness function of minimum spanning trees, where $k$ is the number of edges to be removed. The algorithm uses as a key subroutine a 2-approximation algorithm for the problem of dividing a graph into the maximum number of components by removing $k$ edges from it. For the continuous model, a number of results are presented. First, a general algorithm is given for computing the robustness function of any matroid. The algorithm runs in strongly polynomial time on matroids with a strongly polynomial time independence test. Faster algorithms are also presented for some particular classes of matroids: (1) an $O(n^3m^2 \log (n^2/m))$-time algorithm for graphic matroids, where $m$ is the number of elements in the matroid and $n$ is its rank, (2) an $O(mn(m+n^2)|E|\log(m^2/|E|+2))$-time algorithm for transversal matroids, where $|E|$ is a parameter of the matroid, (3) an $O(m^2n^2)$-time algorithm for scheduling matroids, and (4) an $O(m \log m)$-time algorithm for partition matroids. For this last class of matroids an optimal algorithm is also presented for evaluating the robustness function at a single point. %B Research Report / Max-Planck-Institut für Informatik
[112]
D. Frigioni, A. Marchetti-Spaccamela, and U. Nanni, “Fully dynamic shortest paths and negative cycle detection on digraphs with arbitrary arc weights,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-009, 1998.
Abstract
We study the problem of maintaining the distances and the shortest paths from a source node in a directed graph with arbitrary arc weights, when weight updates of arcs are performed. We propose algorithms that work for any digraph and have optimal space requirements and query time. If a negative-length cycle is introduced during weight decrease operations, it is detected by the algorithms. The proposed algorithms explicitly deal with zero-length cycles. The cost of update operations depends on the class of the considered digraph and on the number of the output updates. We show that, if the digraph has a $k$-bounded accounting function (as in the case of digraphs with genus, arboricity, degree, treewidth or pagenumber bounded by $k$) the update procedures require $O(k\cdot n\cdot \log n)$ worst case time. In the case of digraphs with $n$ nodes and $m$ arcs $k=O(\sqrt{m})$, and hence we obtain $O(\sqrt{m}\cdot n \cdot \log n)$ worst case time per operation, which is better by a factor of $O(\sqrt{m} / \log n)$ than recomputing everything from scratch after each input update. If we also perform insertions and deletions of arcs, all the above bounds become amortized.
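To see where the stated factor comes from, note that recomputing single-source shortest paths from scratch under arbitrary (possibly negative) arc weights costs $O(n \cdot m)$ time per update with the Bellman-Ford algorithm (this baseline is our reading; the report may compare against a different one). The ratio of the two bounds is then $O(nm) / O(\sqrt{m}\cdot n \log n) = O(\sqrt{m}/\log n)$, exactly the improvement claimed above.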
Export
BibTeX
@techreport{FrigioniMarchetti-SpaccamelaNanni98, TITLE = {Fully dynamic shortest paths and negative cycle detection on digraphs with arbitrary arc weights}, AUTHOR = {Frigioni, Daniele and Marchetti-Spaccamela, A. and Nanni, U.}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-009}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We study the problem of maintaining the distances and the shortest paths from a source node in a directed graph with arbitrary arc weights, when weight updates of arcs are performed. We propose algorithms that work for any digraph and have optimal space requirements and query time. If a negative-length cycle is introduced during weight decrease operations, it is detected by the algorithms. The proposed algorithms explicitly deal with zero-length cycles. The cost of update operations depends on the class of the considered digraph and on the number of the output updates. We show that, if the digraph has a $k$-bounded accounting function (as in the case of digraphs with genus, arboricity, degree, treewidth or pagenumber bounded by $k$) the update procedures require $O(k\cdot n\cdot \log n)$ worst case time. In the case of digraphs with $n$ nodes and $m$ arcs $k=O(\sqrt{m})$, and hence we obtain $O(\sqrt{m}\cdot n \cdot \log n)$ worst case time per operation, which is better by a factor of $O(\sqrt{m} / \log n)$ than recomputing everything from scratch after each input update. If we also perform insertions and deletions of arcs, all the above bounds become amortized.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Frigioni, Daniele %A Marchetti-Spaccamela, A. %A Nanni, U. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Fully dynamic shortest paths and negative cycle detection on digraphs with arbitrary arc weights : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BD9-A %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 18 p. %X We study the problem of maintaining the distances and the shortest paths from a source node in a directed graph with arbitrary arc weights, when weight updates of arcs are performed. We propose algorithms that work for any digraph and have optimal space requirements and query time. If a negative-length cycle is introduced during weight decrease operations, it is detected by the algorithms. The proposed algorithms explicitly deal with zero-length cycles. The cost of update operations depends on the class of the considered digraph and on the number of the output updates. We show that, if the digraph has a $k$-bounded accounting function (as in the case of digraphs with genus, arboricity, degree, treewidth or pagenumber bounded by $k$) the update procedures require $O(k\cdot n\cdot \log n)$ worst case time. In the case of digraphs with $n$ nodes and $m$ arcs $k=O(\sqrt{m})$, and hence we obtain $O(\sqrt{m}\cdot n \cdot \log n)$ worst case time per operation, which is better by a factor of $O(\sqrt{m} / \log n)$ than recomputing everything from scratch after each input update. If we also perform insertions and deletions of arcs, all the above bounds become amortized. %B Research Report / Max-Planck-Institut für Informatik
[113]
T. Hagerup, “Simpler and faster static AC$^0$ dictionaries,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-001, 1998.
Abstract
We consider the static dictionary problem of using $O(n)$ $w$-bit words to store $n$ $w$-bit keys for fast retrieval on a $w$-bit AC$^0$ RAM, i.e., on a RAM with a word length of $w$ bits whose instruction set is arbitrary, except that each instruction must be realizable through an unbounded-fanin circuit of constant depth and $w^{O(1)}$ size, and that the instruction set must be finite and independent of the keys stored. We improve the best known upper bounds for moderate values of $w$ relative to $n$. If ${w/{\log n}}=(\log\log n)^{O(1)}$, query time $(\log\log\log n)^{O(1)}$ is achieved, and if additionally ${w/{\log n}}\ge(\log\log n)^{1+\epsilon}$ for some fixed $\epsilon>0$, the query time is constant. For both of these special cases, the best previous upper bound was $O(\log\log n)$.
Export
BibTeX
@techreport{Torben98, TITLE = {Simpler and faster static {AC}$^0$ dictionaries}, AUTHOR = {Hagerup, Torben}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We consider the static dictionary problem of using $O(n)$ $w$-bit words to store $n$ $w$-bit keys for fast retrieval on a $w$-bit AC$^0$ RAM, i.e., on a RAM with a word length of $w$ bits whose instruction set is arbitrary, except that each instruction must be realizable through an unbounded-fanin circuit of constant depth and $w^{O(1)}$ size, and that the instruction set must be finite and independent of the keys stored. We improve the best known upper bounds for moderate values of~$w$ relative to $n$. If ${w/{\log n}}=(\log\log n)^{O(1)}$, query time $(\log\log\log n)^{O(1)}$ is achieved, and if additionally ${w/{\log n}}\ge(\log\log n)^{1+\epsilon}$ for some fixed $\epsilon>0$, the query time is constant. For both of these special cases, the best previous upper bound was $O(\log\log n)$.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Hagerup, Torben %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Simpler and faster static AC$^0$ dictionaries : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9A0E-5 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 13 p. %X We consider the static dictionary problem of using $O(n)$ $w$-bit words to store $n$ $w$-bit keys for fast retrieval on a $w$-bit AC$^0$ RAM, i.e., on a RAM with a word length of $w$ bits whose instruction set is arbitrary, except that each instruction must be realizable through an unbounded-fanin circuit of constant depth and $w^{O(1)}$ size, and that the instruction set must be finite and independent of the keys stored. We improve the best known upper bounds for moderate values of $w$ relative to $n$. If ${w/{\log n}}=(\log\log n)^{O(1)}$, query time $(\log\log\log n)^{O(1)}$ is achieved, and if additionally ${w/{\log n}}\ge(\log\log n)^{1+\epsilon}$ for some fixed $\epsilon>0$, the query time is constant. For both of these special cases, the best previous upper bound was $O(\log\log n)$. %B Research Report / Max-Planck-Institut für Informatik
[114]
M. R. Henzinger and S. Leonardi, “Scheduling multicasts on unit-capacity trees and meshes,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-015, 1998.
Abstract
This paper studies the multicast routing and admission control problem on unit-capacity tree and mesh topologies in the throughput model. The problem is a generalization of the edge-disjoint paths problem and is NP-hard both on trees and meshes. We study both the offline and the online version of the problem: In the offline setting, we give the first constant-factor approximation algorithm for trees, and an O((log log n)^2)-factor approximation algorithm for meshes. In the online setting, we give the first polylogarithmic competitive online algorithm for tree and mesh topologies. No polylogarithmic-competitive algorithm is possible on general network topologies [Bartal, Fiat, Leonardi, 96], and there exists a polylogarithmic lower bound on the competitive ratio of any online algorithm on tree topologies [Awerbuch, Azar, Fiat, Leighton, 96]. We prove the same lower bound for meshes.
Export
BibTeX
@techreport{HenzingerLeonardi98, TITLE = {Scheduling multicasts on unit-capacity trees and meshes}, AUTHOR = {Henzinger, Monika R. and Leonardi, Stefano}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-015}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {This paper studies the multicast routing and admission control problem on unit-capacity tree and mesh topologies in the throughput model. The problem is a generalization of the edge-disjoint paths problem and is NP-hard both on trees and meshes. We study both the offline and the online version of the problem: In the offline setting, we give the first constant-factor approximation algorithm for trees, and an O((log log n)^2)-factor approximation algorithm for meshes. In the online setting, we give the first polylogarithmic competitive online algorithm for tree and mesh topologies. No polylogarithmic-competitive algorithm is possible on general network topologies [Bartal, Fiat, Leonardi, 96], and there exists a polylogarithmic lower bound on the competitive ratio of any online algorithm on tree topologies [Awerbuch, Azar, Fiat, Leighton, 96]. We prove the same lower bound for meshes.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Henzinger, Monika R. %A Leonardi, Stefano %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Scheduling multicasts on unit-capacity trees and meshes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BC5-5 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 38 p. %X This paper studies the multicast routing and admission control problem on unit-capacity tree and mesh topologies in the throughput model. The problem is a generalization of the edge-disjoint paths problem and is NP-hard both on trees and meshes. We study both the offline and the online version of the problem: In the offline setting, we give the first constant-factor approximation algorithm for trees, and an O((log log n)^2)-factor approximation algorithm for meshes. In the online setting, we give the first polylogarithmic competitive online algorithm for tree and mesh topologies. No polylogarithmic-competitive algorithm is possible on general network topologies [Bartal, Fiat, Leonardi, 96], and there exists a polylogarithmic lower bound on the competitive ratio of any online algorithm on tree topologies [Awerbuch, Azar, Fiat, Leighton, 96]. We prove the same lower bound for meshes. %B Research Report / Max-Planck-Institut für Informatik
[115]
K. Jansen and L. Porkolab, “Improved approximation schemes for scheduling unrelated parallel machines,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-026, 1998.
Abstract
We consider the problem of scheduling $n$ independent jobs on $m$ unrelated parallel machines. Each job has to be processed by exactly one machine, processing job $j$ on machine $i$ requires $p_{ij}$ time units, and the objective is to minimize the makespan, i.e., the maximum job completion time. We focus on the case when $m$ is fixed and develop a fully polynomial approximation scheme whose running time depends only linearly on $n$. In the second half of the paper we extend this result to a variant of the problem, where processing job $j$ on machine $i$ also incurs a cost of $c_{ij}$, and thus there are two optimization criteria: makespan and cost. We show that for any fixed $m$, there is a fully polynomial approximation scheme that, given values $T$ and $C$, computes for any fixed $\epsilon > 0$ a schedule in $O(n)$ time with makespan at most $(1+\epsilon)T$ and cost at most $(1 + \epsilon)C$, if there exists a schedule of makespan $T$ and cost $C$.
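In symbols (notation ours): a schedule is an assignment $\sigma: \{1,\ldots,n\} \to \{1,\ldots,m\}$ of jobs to machines, and the two criteria are $\mathrm{makespan}(\sigma) = \max_{1 \le i \le m} \sum_{j: \sigma(j)=i} p_{ij}$ and $\mathrm{cost}(\sigma) = \sum_{j=1}^{n} c_{\sigma(j),j}$. The bicriteria guarantee then reads: whenever some schedule has makespan at most $T$ and cost at most $C$, the scheme returns, for any fixed $\epsilon > 0$, a schedule $\sigma$ with $\mathrm{makespan}(\sigma) \le (1+\epsilon)T$ and $\mathrm{cost}(\sigma) \le (1+\epsilon)C$, in $O(n)$ time.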
Export
BibTeX
@techreport{JansenPorkolab98-1-026, TITLE = {Improved approximation schemes for scheduling unrelated parallel machines}, AUTHOR = {Jansen, Klaus and Porkolab, Lorant}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-026}, NUMBER = {MPI-I-1998-1-026}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We consider the problem of scheduling $n$ independent jobs on $m$ unrelated parallel machines. Each job has to be processed by exactly one machine, processing job $j$ on machine $i$ requires $p_{ij}$ time units, and the objective is to minimize the makespan, i.e., the maximum job completion time. We focus on the case when $m$ is fixed and develop a fully polynomial approximation scheme whose running time depends only linearly on $n$. In the second half of the paper we extend this result to a variant of the problem, where processing job $j$ on machine $i$ also incurs a cost of $c_{ij}$, and thus there are two optimization criteria: makespan and cost. We show that for any fixed $m$, there is a fully polynomial approximation scheme that, given values $T$ and $C$, computes for any fixed $\epsilon > 0$ a schedule in $O(n)$ time with makespan at most $(1+\epsilon)T$ and cost at most $(1 + \epsilon)C$, if there exists a schedule of makespan $T$ and cost $C$.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Jansen, Klaus %A Porkolab, Lorant %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Improved approximation schemes for scheduling unrelated parallel machines : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B69-4 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-026 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 14 p. %X We consider the problem of scheduling $n$ independent jobs on $m$ unrelated parallel machines. Each job has to be processed by exactly one machine, processing job $j$ on machine $i$ requires $p_{ij}$ time units, and the objective is to minimize the makespan, i.e., the maximum job completion time. We focus on the case when $m$ is fixed and develop a fully polynomial approximation scheme whose running time depends only linearly on $n$. In the second half of the paper we extend this result to a variant of the problem, where processing job $j$ on machine $i$ also incurs a cost of $c_{ij}$, and thus there are two optimization criteria: makespan and cost. We show that for any fixed $m$, there is a fully polynomial approximation scheme that, given values $T$ and $C$, computes for any fixed $\epsilon > 0$ a schedule in $O(n)$ time with makespan at most $(1+\epsilon)T$ and cost at most $(1 + \epsilon)C$, if there exists a schedule of makespan $T$ and cost $C$. %B Research Report / Max-Planck-Institut für Informatik
[116]
K. Jansen, “A new characterization for parity graphs and a coloring problem with costs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-006, 1998.
Abstract
In this paper, we give a characterization for parity graphs. A graph is a parity graph if and only if, for every pair of vertices, all minimal chains joining them have the same parity. We prove that $G$ is a parity graph if and only if the Cartesian product $G \times K_2$ is a perfect graph. Furthermore, as a consequence we get a result for the polyhedron corresponding to an integer linear program formulation of a coloring problem with costs. For the case that the costs $k_{v,3} = k_{v,c}$ for each color $c \ge 3$ and vertex $v \in V$, we show that the polyhedron contains only integral $0/1$ extrema if and only if the graph $G$ is a parity graph.
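Two quick examples of the chain-parity condition (ours, not the report's): in a bipartite graph every path between a fixed pair of vertices has the same length parity, determined by the sides of the bipartition containing the endpoints, so all bipartite graphs are parity graphs. In the odd cycle $C_5$, by contrast, two adjacent vertices are joined by induced paths of lengths $1$ and $4$, which differ in parity, so $C_5$ is not a parity graph.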
Export
BibTeX
@techreport{Jansen98-1-006, TITLE = {A new characterization for parity graphs and a coloring problem with costs}, AUTHOR = {Jansen, Klaus}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {In this paper, we give a characterization for parity graphs. A graph is a parity graph if and only if, for every pair of vertices, all minimal chains joining them have the same parity. We prove that $G$ is a parity graph if and only if the Cartesian product $G \times K_2$ is a perfect graph. Furthermore, as a consequence we get a result for the polyhedron corresponding to an integer linear program formulation of a coloring problem with costs. For the case that the costs $k_{v,3} = k_{v,c}$ for each color $c \ge 3$ and vertex $v \in V$, we show that the polyhedron contains only integral $0/1$ extrema if and only if the graph $G$ is a parity graph.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Jansen, Klaus %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A new characterization for parity graphs and a coloring problem with costs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BE2-3 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 16 p. %X In this paper, we give a characterization for parity graphs. A graph is a parity graph if and only if, for every pair of vertices, all minimal chains joining them have the same parity. We prove that $G$ is a parity graph if and only if the Cartesian product $G \times K_2$ is a perfect graph. Furthermore, as a consequence we get a result for the polyhedron corresponding to an integer linear program formulation of a coloring problem with costs. For the case that the costs $k_{v,3} = k_{v,c}$ for each color $c \ge 3$ and vertex $v \in V$, we show that the polyhedron contains only integral $0/1$ extrema if and only if the graph $G$ is a parity graph. %B Research Report / Max-Planck-Institut für Informatik
[117]
K. Jansen and L. Porkolab, “Linear-time approximation schemes for scheduling malleable parallel tasks,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-025, 1998.
Abstract
A malleable parallel task is one whose execution time is a function of the number of (identical) processors allotted to it. We study the problem of scheduling a set of $n$ independent malleable tasks on a fixed number of parallel processors, and propose an approximation scheme that for any fixed $\epsilon > 0$, computes in $O(n)$ time a non-preemptive schedule of length at most $(1+\epsilon)$ times the optimum.
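In symbols (notation ours): task $j$ comes with a function $t_j(p)$ giving its execution time on $p$ identical processors. A non-preemptive schedule picks for each task an allotment $p_j$ and a start time $s_j$ such that at every moment the allotments of the tasks running at that moment sum to at most the number $P$ of available processors, and the objective is the makespan $\max_j (s_j + t_j(p_j))$. For every fixed $P$ and fixed $\epsilon > 0$, the scheme above gets within a factor $1+\epsilon$ of the optimal makespan in time linear in $n$.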
Export
BibTeX
@techreport{JansenPorkolab98-1-025, TITLE = {Linear-time approximation schemes for scheduling malleable parallel tasks}, AUTHOR = {Jansen, Klaus and Porkolab, Lorant}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-025}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {A malleable parallel task is one whose execution time is a function of the number of (identical) processors allotted to it. We study the problem of scheduling a set of $n$ independent malleable tasks on a fixed number of parallel processors, and propose an approximation scheme that for any fixed $\epsilon > 0$, computes in $O(n)$ time a non-preemptive schedule of length at most $(1+\epsilon)$ times the optimum.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Jansen, Klaus %A Porkolab, Lorant %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Linear-time approximation schemes for scheduling malleable parallel tasks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B6C-D %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 15 p. %X A malleable parallel task is one whose execution time is a function of the number of (identical) processors allotted to it. We study the problem of scheduling a set of $n$ independent malleable tasks on a fixed number of parallel processors, and propose an approximation scheme that for any fixed $\epsilon > 0$, computes in $O(n)$ time a non-preemptive schedule of length at most $(1+\epsilon)$ times the optimum. %B Research Report / Max-Planck-Institut für Informatik
[118]
K. Jansen, “The mutual exclusion scheduling problem for permutation and comparability graphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-005, 1998.
Abstract
In this paper, we consider the mutual exclusion scheduling problem for comparability graphs. Given an undirected graph $G$ and a fixed constant $m$, the problem is to find a minimum coloring of $G$ such that each color is used at most $m$ times. The complexity of this problem for comparability graphs was mentioned as an open problem by Möhring (1985) and for permutation graphs (a subclass of comparability graphs) as an open problem by Lonc (1991). We prove that this problem is already NP-complete for permutation graphs and for each fixed constant $m \ge 6$.
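Equivalently (our rephrasing): a coloring in which every color is used at most $m$ times is a partition of the vertex set into independent sets of size at most $m$, so the problem asks for the minimum $k$ with $V(G) = V_1 \cup \cdots \cup V_k$, each $V_i$ independent and $|V_i| \le m$. The scheduling reading is that vertices are unit-time jobs, edges are mutual exclusion constraints, and each color class is a time slot in which at most $m$ jobs run on the $m$ available processors.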
Export
BibTeX
@techreport{Jansen98-1-005, TITLE = {The mutual exclusion scheduling problem for permutation and comparability graphs}, AUTHOR = {Jansen, Klaus}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {In this paper, we consider the mutual exclusion scheduling problem for comparability graphs. Given an undirected graph $G$ and a fixed constant $m$, the problem is to find a minimum coloring of $G$ such that each color is used at most $m$ times. The complexity of this problem for comparability graphs was mentioned as an open problem by M\"ohring (1985) and for permutation graphs (a subclass of comparability graphs) as an open problem by Lonc (1991). We prove that this problem is already NP-complete for permutation graphs and for each fixed constant $m \ge 6$.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Jansen, Klaus %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T The mutual exclusion scheduling problem for permutation and comparability graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BE5-E %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 12 p. %X In this paper, we consider the mutual exclusion scheduling problem for comparability graphs. Given an undirected graph $G$ and a fixed constant $m$, the problem is to find a minimum coloring of $G$ such that each color is used at most $m$ times. The complexity of this problem for comparability graphs was mentioned as an open problem by Möhring (1985) and for permutation graphs (a subclass of comparability graphs) as an open problem by Lonc (1991). We prove that this problem is already NP-complete for permutation graphs and for each fixed constant $m \ge 6$. %B Research Report / Max-Planck-Institut für Informatik
[119]
M. Jünger, S. Leipert, and P. Mutzel, “A note on computing a maximal planar subgraph using PQ-trees,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-008, 1998.
Abstract
The problem of computing a maximal planar subgraph of a non-planar graph has been deeply investigated over the last 20 years. Several attempts have been made to solve the problem with the help of PQ-trees. The latest attempt has been reported by Jayakumar et al. [10]. In this paper we show that the algorithm presented by Jayakumar et al. is not correct. We show that it does not necessarily compute a maximal planar subgraph and we note that the same holds for a modified version of the algorithm presented by Kant [12]. Our conclusions suggest that PQ-trees should most likely not be used at all for this specific problem.
Export
BibTeX
@techreport{J'ungerLeipertMutzel98, TITLE = {A note on computing a maximal planar subgraph using {PQ}-trees}, AUTHOR = {J{\"u}nger, Michael and Leipert, Sebastian and Mutzel, Petra}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-008}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {The problem of computing a maximal planar subgraph of a non-planar graph has been deeply investigated over the last 20 years. Several attempts have been made to solve the problem with the help of PQ-trees. The latest attempt has been reported by Jayakumar et al. [10]. In this paper we show that the algorithm presented by Jayakumar et al. is not correct. We show that it does not necessarily compute a maximal planar subgraph and we note that the same holds for a modified version of the algorithm presented by Kant [12]. Our conclusions suggest that PQ-trees should most likely not be used at all for this specific problem.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Jünger, Michael %A Leipert, Sebastian %A Mutzel, Petra %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A note on computing a maximal planar subgraph using PQ-trees : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BDC-4 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 5 p. %X The problem of computing a maximal planar subgraph of a non-planar graph has been deeply investigated over the last 20 years. Several attempts have been made to solve the problem with the help of PQ-trees. The latest attempt has been reported by Jayakumar et al. [10]. In this paper we show that the algorithm presented by Jayakumar et al. is not correct. We show that it does not necessarily compute a maximal planar subgraph and we note that the same holds for a modified version of the algorithm presented by Kant [12]. Our conclusions suggest that PQ-trees should most likely not be used at all for this specific problem. %B Research Report / Max-Planck-Institut für Informatik
[120]
G. W. Klau and P. Mutzel, “Optimal compaction of orthogonal grid drawings,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-031, 1998.
Abstract
We consider the two-dimensional compaction problem for orthogonal grid drawings in which the task is to alter the coordinates of the vertices and edge segments while preserving the shape of the drawing so that the total edge length is minimized. The problem is closely related to two-dimensional compaction in {\sc VLSI} design and is conjectured to be {\sl NP}-hard. We characterize the set of feasible solutions for the two-dimensional compaction problem in terms of paths in the so-called constraint graphs in $x$- and $y$-direction. Similar graphs (known as {\em layout graphs}) have already been used for one-dimensional compaction in {\sc VLSI} design, but this is the first time that a direct connection between these graphs is established. Given the pair of constraint graphs, the two-dimensional compaction task can be viewed as extending these graphs by new arcs so that certain conditions are satisfied and the total edge length is minimized. We can recognize those instances having only one such extension; for these cases we can solve the compaction problem in polynomial time. We have transformed the geometrical problem into a graph-theoretical one which can be formulated as an integer linear program. Our computational experiments have shown that the new approach works well in practice. It is the first time that the two-dimensional compaction problem is formulated as an integer linear program.
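A minimal sketch of the one-dimensional building block, in our notation and simplified from the abstract: given the constraint graph $D = (N, A)$ in the $x$-direction, introduce a coordinate $x_v$ for every node $v \in N$, require $x_w - x_v \ge 1$ for every arc $(v, w) \in A$ to preserve the shape and the grid spacing, and minimize the total length $\sum (x_w - x_v)$ over the arcs that correspond to edge segments of the drawing. Each direction alone is a linear program over difference constraints; the two-dimensional problem couples the $x$- and $y$-programs through the choice of which new arcs to add, and it is this combinatorial choice that the integer linear program encodes.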
Export
BibTeX
@techreport{KlauMutzel98-1-031, TITLE = {Optimal compaction of orthogonal grid drawings}, AUTHOR = {Klau, Gunnar W. and Mutzel, Petra}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-031}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We consider the two-dimensional compaction problem for orthogonal grid drawings in which the task is to alter the coordinates of the vertices and edge segments while preserving the shape of the drawing so that the total edge length is minimized. The problem is closely related to two-dimensional compaction in {\sc VLSI} design and is conjectured to be {\sl NP}-hard. We characterize the set of feasible solutions for the two-dimensional compaction problem in terms of paths in the so-called constraint graphs in $x$- and $y$-direction. Similar graphs (known as {\em layout graphs}) have already been used for one-dimensional compaction in {\sc VLSI} design, but this is the first time that a direct connection between these graphs is established. Given the pair of constraint graphs, the two-dimensional compaction task can be viewed as extending these graphs by new arcs so that certain conditions are satisfied and the total edge length is minimized. We can recognize those instances having only one such extension; for these cases we can solve the compaction problem in polynomial time. We have transformed the geometrical problem into a graph-theoretical one which can be formulated as an integer linear program. Our computational experiments have shown that the new approach works well in practice. It is the first time that the two-dimensional compaction problem is formulated as an integer linear program.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Klau, Gunnar W. %A Mutzel, Petra %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Optimal compaction of orthogonal grid drawings : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B5A-6 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 20 p. %X We consider the two-dimensional compaction problem for orthogonal grid drawings in which the task is to alter the coordinates of the vertices and edge segments while preserving the shape of the drawing so that the total edge length is minimized. The problem is closely related to two-dimensional compaction in {\sc VLSI} design and is conjectured to be {\sl NP}-hard. We characterize the set of feasible solutions for the two-dimensional compaction problem in terms of paths in the so-called constraint graphs in $x$- and $y$-direction. Similar graphs (known as {\em layout graphs}) have already been used for one-dimensional compaction in {\sc VLSI} design, but this is the first time that a direct connection between these graphs is established. Given the pair of constraint graphs, the two-dimensional compaction task can be viewed as extending these graphs by new arcs so that certain conditions are satisfied and the total edge length is minimized. We can recognize those instances having only one such extension; for these cases we can solve the compaction problem in polynomial time. We have transformed the geometrical problem into a graph-theoretical one which can be formulated as an integer linear program. Our computational experiments have shown that the new approach works well in practice. It is the first time that the two-dimensional compaction problem is formulated as an integer linear program. %B Research Report / Max-Planck-Institut für Informatik
[121]
G. W. Klau and P. Mutzel, “Quasi-orthogonal drawing of planar graphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-013, 1998.
Abstract
Orthogonal drawings of graphs are highly accepted in practice. For planar graphs with vertex degree of at most four, Tamassia gives a polynomial time algorithm which computes a region-preserving orthogonal grid embedding with the minimum number of bends. However, the graphs arising in practical applications rarely have bounded vertex degree. In order to cope with general planar graphs, we introduce the quasi-orthogonal drawing model. In this model, vertices are drawn on grid points, and edges follow the grid paths except around vertices of high degree. Furthermore, we present an extension of Tamassia's algorithm that constructs quasi-orthogonal drawings. We compare the drawings to those obtained using related approaches.
Export
BibTeX
@techreport{KlauMutzel98-1-013, TITLE = {Quasi-orthogonal drawing of planar graphs}, AUTHOR = {Klau, Gunnar W. and Mutzel, Petra}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-013}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {Orthogonal drawings of graphs are highly accepted in practice. For planar graphs with vertex degree of at most four, Tamassia gives a polynomial time algorithm which computes a region-preserving orthogonal grid embedding with the minimum number of bends. However, the graphs arising in practical applications rarely have bounded vertex degree. In order to cope with general planar graphs, we introduce the quasi-orthogonal drawing model. In this model, vertices are drawn on grid points, and edges follow the grid paths except around vertices of high degree. Furthermore, we present an extension of Tamassia's algorithm that constructs quasi-orthogonal drawings. We compare the drawings to those obtained using related approaches.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Klau, Gunnar W. %A Mutzel, Petra %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Quasi-orthogonal drawing of planar graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BCC-8 %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 15 p. %X Orthogonal drawings of graphs are highly accepted in practice. For planar graphs with vertex degree of at most four, Tamassia gives a polynomial time algorithm which computes a region-preserving orthogonal grid embedding with the minimum number of bends. However, the graphs arising in practical applications rarely have bounded vertex degree. In order to cope with general planar graphs, we introduce the quasi-orthogonal drawing model. In this model, vertices are drawn on grid points, and edges follow the grid paths except around vertices of high degree. Furthermore, we present an extension of Tamassia's algorithm that constructs quasi-orthogonal drawings. We compare the drawings to those obtained using related approaches. %B Research Report / Max-Planck-Institut für Informatik
[122]
P. Krysta and K. Lorys, “New approximation algorithms for the achromatic number,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-016, 1998.
Abstract
The achromatic number of a graph is the greatest number of colors in a coloring of the vertices of the graph such that adjacent vertices get distinct colors and for every pair of colors some vertex of the first color and some vertex of the second color are adjacent. The problem of computing this number is NP-complete for general graphs, as proved by Yannakakis and Gavril (1980). The problem is also NP-complete for trees, as proved by Cairnie and Edwards (1997). Chaudhary and Vishwanathan (1997) recently gave a $7$-approximation algorithm for this problem on trees, and an $O(\sqrt{n})$-approximation algorithm for the problem on graphs with girth (length of the shortest cycle) at least six. We present the first $2$-approximation algorithm for the problem on trees. This is a new algorithm based on different ideas than the one by Chaudhary and Vishwanathan (1997). We then give a $1.15$-approximation algorithm for the problem on binary trees and a $1.58$-approximation for the problem on trees of constant degree. We show that the algorithms for constant degree trees can be implemented in linear time. We also present the first $O(n^{3/8})$-approximation algorithm for the problem on graphs with girth at least six. Our algorithms are based on an interesting tree partitioning technique. Moreover, we improve the lower bound of Farber {\em et al.} (1986) for the achromatic number of trees with degree bounded by three.
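The definition lends itself to a small executable check. The following C++ sketch is ours and unrelated to the approximation algorithms of the report: it tests whether a coloring is a complete coloring, i.e., proper and realizing every pair of used colors on some edge; the achromatic number is the largest number of colors over all complete colorings.

#include <algorithm>
#include <iostream>
#include <set>
#include <utility>
#include <vector>

// Returns true iff 'color' is proper and every pair of used colors
// appears on at least one edge (a "complete coloring").
bool is_complete_coloring(int n, const std::vector<std::pair<int,int>>& edges,
                          const std::vector<int>& color) {
    if ((int)color.size() != n) return false;
    std::set<int> used(color.begin(), color.end());
    std::set<std::pair<int,int>> seen;            // color pairs realized on edges
    for (auto [u, v] : edges) {
        if (color[u] == color[v]) return false;   // not a proper coloring
        seen.insert({std::min(color[u], color[v]), std::max(color[u], color[v])});
    }
    for (int a : used)
        for (int b : used)
            if (a < b && !seen.count({a, b})) return false;  // missing color pair
    return true;
}

int main() {
    // Path 0-1-2-3: the coloring (1,2,3,1) is complete, so the achromatic
    // number of P4 is at least 3 (with 4 colors, 6 pairs cannot fit on 3 edges).
    std::vector<std::pair<int,int>> p4{{0,1},{1,2},{2,3}};
    std::cout << is_complete_coloring(4, p4, {1,2,3,1}) << '\n';  // prints 1
}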
Export
BibTeX
@techreport{KrystaLorys98-1-016, TITLE = {New approximation algorithms for the achromatic number}, AUTHOR = {Krysta, Piotr and Lorys, Krzysztof}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-016}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {The achromatic number of a graph is the greatest number of colors in a coloring of the vertices of the graph such that adjacent vertices get distinct colors and for every pair of colors some vertex of the first color and some vertex of the second color are adjacent. The problem of computing this number is NP-complete for general graphs, as proved by Yannakakis and Gavril (1980). The problem is also NP-complete for trees, as proved by Cairnie and Edwards (1997). Chaudhary and Vishwanathan (1997) recently gave a $7$-approximation algorithm for this problem on trees, and an $O(\sqrt{n})$-approximation algorithm for the problem on graphs with girth (length of the shortest cycle) at least six. We present the first $2$-approximation algorithm for the problem on trees. This is a new algorithm based on different ideas than the one by Chaudhary and Vishwanathan (1997). We then give a $1.15$-approximation algorithm for the problem on binary trees and a $1.58$-approximation for the problem on trees of constant degree. We show that the algorithms for constant degree trees can be implemented in linear time. We also present the first $O(n^{3/8})$-approximation algorithm for the problem on graphs with girth at least six. Our algorithms are based on an interesting tree partitioning technique. Moreover, we improve the lower bound of Farber {\em et al.} (1986) for the achromatic number of trees with degree bounded by three.}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Krysta, Piotr %A Lorys, Krzysztof %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T New approximation algorithms for the achromatic number : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BC1-D %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 26 p. %X The achromatic number of a graph is the greatest number of colors in a coloring of the vertices of the graph such that adjacent vertices get distinct colors and for every pair of colors some vertex of the first color and some vertex of the second color are adjacent. The problem of computing this number is NP-complete for general graphs, as proved by Yannakakis and Gavril (1980). The problem is also NP-complete for trees, as proved by Cairnie and Edwards (1997). Chaudhary and Vishwanathan (1997) recently gave a $7$-approximation algorithm for this problem on trees, and an $O(\sqrt{n})$-approximation algorithm for the problem on graphs with girth (length of the shortest cycle) at least six. We present the first $2$-approximation algorithm for the problem on trees. This is a new algorithm based on different ideas than the one by Chaudhary and Vishwanathan (1997). We then give a $1.15$-approximation algorithm for the problem on binary trees and a $1.58$-approximation for the problem on trees of constant degree. We show that the algorithms for constant degree trees can be implemented in linear time. We also present the first $O(n^{3/8})$-approximation algorithm for the problem on graphs with girth at least six. Our algorithms are based on an interesting tree partitioning technique. Moreover, we improve the lower bound of Farber {\em et al.} (1986) for the achromatic number of trees with degree bounded by three. %B Research Report / Max-Planck-Institut für Informatik
[123]
S. Mahajan, E. A. Ramos, and K. V. Subrahmanyam, “Solving some discrepancy problems in NC*,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-012, 1998.
Export
BibTeX
@techreport{MahajanRamosSubrahmanyam98, TITLE = {Solving some discrepancy problems in {NC}*}, AUTHOR = {Mahajan, Sanjeev and Ramos, Edgar A. and Subrahmanyam, K. V.}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-012}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik}, }
Endnote
%0 Report %A Mahajan, Sanjeev %A Ramos, Edgar A. %A Subrahmanyam, K. V. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Solving some discrepancy problems in NC* : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BD0-B %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 21 p. %B Research Report / Max-Planck-Institut für Informatik
[124]
K. Mehlhorn, Ed., “2nd Workshop on Algorithm Engineering WAE ’98 -- Proceedings,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-019, 1998.
Export
BibTeX
@techreport{MehlhornWAE98, TITLE = {2nd Workshop on Algorithm Engineering {WAE} '98 -- Proceedings}, EDITOR = {Mehlhorn, Kurt}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-019}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, TYPE = {Research Report}, }
Endnote
%0 Report %E Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T 2nd Workshop on Algorithm Engineering WAE '98 -- Proceedings : %O WAE 1998 %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A388-E %Y Max-Planck-Institut für Informatik %C Saarbrücken %D 1998 %P 213 p. %B Research Report
[125]
U. Meyer and J. Sibeyn, “Time-independent gossiping on full-port tori,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-014, 1998.
Abstract
Near-optimal gossiping algorithms are given for two- and higher dimensional tori. It is assumed that the amount of data each PU contributes is so large that start-up time may be neglected. For two-dimensional tori, a previous algorithm achieved optimality in an intricate way, with a time-dependent routing pattern. In all steps of our algorithms, the PUs forward the received packets in the same way.
Export
BibTeX
@techreport{UlrichSibeyn98, TITLE = {Time-independent gossiping on full-port tori}, AUTHOR = {Meyer, Ulrich and Sibeyn, Jop}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-014}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {Near-optimal gossiping algorithms are given for two- and higher dimensional tori. It is assumed that the amount of data each PU contributes is so large that start-up time may be neglected. For two-dimensional tori, a previous algorithm achieved optimality in an intricate way, with a time-dependent routing pattern. In all steps of our algorithms, the PUs forward the received packets in the same way.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Meyer, Ulrich %A Sibeyn, Jop %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Time-independent gossiping on full-port tori : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BC9-E %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 20 p. %X Near-optimal gossiping algorithms are given for two- and higher dimensional tori. It is assumed that the amount of data each PU contributes is so large that start-up time may be neglected. For two-dimensional tori, a previous algorithm achieved optimality in an intricate way, with a time-dependent routing pattern. In all steps of our algorithms, the PUs forward the received packets in the same way. %B Research Report / Max-Planck-Institut f&#252;r Informatik
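The time-independence the abstract refers to can be illustrated on the simplest torus, a ring: every PU applies the same forwarding rule in every step (clockwise packets keep moving clockwise, counterclockwise packets counterclockwise). A toy Python simulation of this one-dimensional case, which demonstrates the property but is not the report's higher-dimensional algorithm:

    def ring_gossip(n):
        know = [{i} for i in range(n)]      # what each PU has received so far
        cw = list(range(n))                 # packet travelling clockwise at PU i
        ccw = list(range(n))                # packet travelling counterclockwise
        for _ in range(-(-(n - 1) // 2)):   # ceil((n-1)/2) steps suffice
            cw = [cw[i - 1] for i in range(n)]       # same rule in every step
            ccw = [ccw[(i + 1) % n] for i in range(n)]
            for i in range(n):
                know[i].update((cw[i], ccw[i]))
        return all(k == set(range(n)) for k in know)

    assert all(ring_gossip(n) for n in range(2, 50))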
[126]
P. Mutzel and R. Weiskircher, “Optimizing over all combinatorial embeddings of a planar graph,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-029, 1998.
Abstract
We study the problem of optimizing over the set of all combinatorial embeddings of a given planar graph. Our objective function prefers certain cycles of $G$ as face cycles in the embedding. The motivation for studying this problem arises in graph drawing, where the chosen embedding has an important influence on the aesthetics of the drawing. We characterize the set of all possible embeddings of a given biconnected planar graph $G$ by means of a system of linear inequalities with $\{0,1\}$-variables corresponding to the set of those cycles in $G$ which can appear in a combinatorial embedding. This system of linear inequalities can be constructed recursively using the data structure of SPQR-trees and a new splitting operation. Our computational results on two benchmark sets of graphs are surprising: The number of variables and constraints seems to grow only linearly with the size of the graphs although the number of embeddings grows exponentially. For all tested graphs (up to 500 vertices) and linear objective functions, the resulting integer linear programs could be generated within 600 seconds and solved within two seconds on a Sun Enterprise 10000 using CPLEX.
Export
BibTeX
@techreport{MutzelWeiskircher98, TITLE = {Optimizing over all combinatorial embeddings of a planar graph}, AUTHOR = {Mutzel, Petra and Weiskircher, Ren{\'e}}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-029}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We study the problem of optimizing over the set of all combinatorial embeddings of a given planar graph. Our objective function prefers certain cycles of $G$ as face cycles in the embedding. The motivation for studying this problem arises in graph drawing, where the chosen embedding has an important influence on the aesthetics of the drawing. We characterize the set of all possible embeddings of a given biconnected planar graph $G$ by means of a system of linear inequalities with $\{0,1\}$-variables corresponding to the set of those cycles in $G$ which can appear in a combinatorial embedding. This system of linear inequalities can be constructed recursively using the data structure of SPQR-trees and a new splitting operation. Our computational results on two benchmark sets of graphs are surprising: The number of variables and constraints seems to grow only linearly with the size of the graphs although the number of embeddings grows exponentially. For all tested graphs (up to 500 vertices) and linear objective functions, the resulting integer linear programs could be generated within 600 seconds and solved within two seconds on a Sun Enterprise 10000 using CPLEX.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Mutzel, Petra %A Weiskircher, Ren&#233; %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Optimizing over all combinatorial embeddings of a planar graph : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B66-A %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 23 p. %X We study the problem of optimizing over the set of all combinatorial embeddings of a given planar graph. Our objective function prefers certain cycles of $G$ as face cycles in the embedding. The motivation for studying this problem arises in graph drawing, where the chosen embedding has an important influence on the aesthetics of the drawing. We characterize the set of all possible embeddings of a given biconnected planar graph $G$ by means of a system of linear inequalities with $\{0,1\}$-variables corresponding to the set of those cycles in $G$ which can appear in a combinatorial embedding. This system of linear inequalities can be constructed recursively using the data structure of SPQR-trees and a new splitting operation. Our computational results on two benchmark sets of graphs are surprising: The number of variables and constraints seems to grow only linearly with the size of the graphs although the number of embeddings grows exponentially. For all tested graphs (up to 500 vertices) and linear objective functions, the resulting integer linear programs could be generated within 600 seconds and solved within two seconds on a Sun Enterprise 10000 using CPLEX. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[127]
C. Rüb, “On Wallace’s method for the generation of normal variates,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-020, 1998.
Abstract
A method proposed by Wallace for the generation of normal random variates is examined. His method works by transforming a pool of numbers from the normal distribution into a new pool of numbers. This is in contrast to almost all other known methods that transform one or more variates from the uniform distribution into one or more variates from the normal distribution. Unfortunately, a direct implementation of Wallace's method has a serious flaw: if consecutive numbers produced by this method are added, the resulting variate, which should also be normally distributed, will show a significant deviation from the expected behavior. Wallace's method is analyzed with respect to this deficiency and simple modifications are proposed that lead to variates of better quality. It is argued that more randomness (that is, more uniform random numbers) is needed in the transformation process to improve the quality of the numbers generated. However, an implementation of the modified method still has small deviations from the expected behavior and its running time is much higher than that of the original.
Export
BibTeX
@techreport{Rub98, TITLE = {On Wallace's method for the generation of normal variates}, AUTHOR = {R{\"u}b, Christine}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-020}, NUMBER = {MPI-I-1998-1-020}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {A method proposed by Wallace for the generation of normal random variates is examined. His method works by transforming a pool of numbers from the normal distribution into a new pool of numbers. This is in contrast to almost all other known methods that transform one or more variates from the uniform distribution into one or more variates from the normal distribution. Unfortunately, a direct implementation of Wallace's method has a serious flaw: if consecutive numbers produced by this method are added, the resulting variate, which should also be normally distributed, will show a significant deviation from the expected behavior. Wallace's method is analyzed with respect to this deficiency and simple modifications are proposed that lead to variates of better quality. It is argued that more randomness (that is, more uniform random numbers) is needed in the transformation process to improve the quality of the numbers generated. However, an implementation of the modified method still has small deviations from the expected behavior and its running time is much higher than that of the original.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A R&#252;b, Christine %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On Wallace's method for the generation of normal variates : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7B9B-3 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-020 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 17 p. %X A method proposed by Wallace for the generation of normal random variates is examined. His method works by transforming a pool of numbers from the normal distribution into a new pool of numbers. This is in contrast to almost all other known methods that transform one or more variates from the uniform distribution into one or more variates from the normal distribution. Unfortunately, a direct implementation of Wallace's method has a serious flaw: if consecutive numbers produced by this method are added, the resulting variate, which should also be normally distributed, will show a significant deviation from the expected behavior. Wallace's method is analyzed with respect to this deficiency and simple modifications are proposed that lead to variates of better quality. It is argued that more randomness (that is, more uniform random numbers) is needed in the transformation process to improve the quality of the numbers generated. However, an implementation of the modified method still has small deviations from the expected behavior and its running time is much higher than that of the original. %B Research Report / Max-Planck-Institut f&#252;r Informatik
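Wallace's basic idea can be sketched as follows: if the current pool is (approximately) a vector of i.i.d. standard normals, then applying an orthogonal transformation yields another such vector, with no per-number evaluation of transcendental functions. A simplified Python/NumPy sketch of one pool-to-pool step; this is not Wallace's exact recurrence, and the random permutation stands in for the extra uniform randomness the report argues is needed:

    import numpy as np

    def next_pool(pool, rng):
        # orthogonal 4x4 mixing matrix (a scaled Hadamard matrix); an
        # orthogonal map sends a vector of i.i.d. standard normals to
        # another vector of i.i.d. standard normals
        H = 0.5 * np.array([[1,  1,  1,  1],
                            [1, -1,  1, -1],
                            [1,  1, -1, -1],
                            [1, -1, -1,  1]])
        p = rng.permutation(pool)      # extra randomness, cf. the report's point
        return (H @ p.reshape(-1, 4).T).T.ravel()   # pool size: multiple of 4

    rng = np.random.default_rng(0)
    pool = rng.standard_normal(1024)   # seed pool, produced by any other method
    pool = next_pool(pool, rng)        # one step yields 1024 new variates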
[128]
S. Schirra, “Robustness and precision issues in geometric computation,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-004, 1998.
Abstract
This is a preliminary version of a chapter that will appear in the {\em Handbook on Computational Geometry}, edited by J.R.~Sack and J.~Urrutia. We give a survey on techniques that have been proposed and successfully used to attack robustness and precision problems in the implementation of geometric algorithms.
Export
BibTeX
@techreport{Schirra98-1-004, TITLE = {Robustness and precision issues in geometric computation}, AUTHOR = {Schirra, Stefan}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {This is a preliminary version of a chapter that will appear in the {\em Handbook on Computational Geometry}, edited by J.R.~Sack and J.~Urrutia. We give a survey on techniques that have been proposed and successfully used to attack robustness and precision problems in the implementation of geometric algorithms.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Schirra, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Robustness and precision issues in geometric computation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BE8-8 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 34 p. %X This is a preliminary version of a chapter that will appear in the {\em Handbook on Computational Geometry}, edited by J.R.~Sack and J.~Urrutia. We give a survey on techniques that have been proposed and successfully used to attack robustness and precision problems in the implementation of geometric algorithms. %B Research Report / Max-Planck-Institut f&#252;r Informatik
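One family of techniques covered by such surveys, exact rational arithmetic, is easy to contrast with naive floating point in Python (a classic classroom-style example, not taken from the report): the sign of an orientation determinant decides a branch, and for nearly collinear points the double-precision sign can differ from the exact one.

    from fractions import Fraction

    def orientation(a, b, c):
        # sign of det(b - a, c - a): +1 left turn, -1 right turn, 0 collinear
        d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (d > 0) - (d < 0)

    def exact(p):
        return (Fraction(p[0]), Fraction(p[1]))

    a = (0.5 + 2**-53, 0.5)          # minimally perturbed off the line y = x
    b, c = (12.0, 12.0), (24.0, 24.0)
    print(orientation(a, b, c))                        # doubles: 0, "collinear"
    print(orientation(exact(a), exact(b), exact(c)))   # exact: -1, a right turn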
[129]
S. Schirra, “Parameterized implementations of classical planar convex hull algorithms and extreme point computations,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-003, 1998.
Abstract
We present C{\tt ++}-implementations of some classical algorithms for computing extreme points of a set of points in two-dimensional space. The template feature of C{\tt ++} is used to provide generic code that works with various point types and various implementations of the primitives used in the extreme point computation. The parameterization makes the code flexible and adaptable. The code can be used with primitives provided by the CGAL-kernel, primitives provided by LEDA, and others. The interfaces of the convex hull functions are compliant with the Standard Template Library.
Export
BibTeX
@techreport{Schirra1998-1-003, TITLE = {Parameterized implementations of classical planar convex hull algorithms and extreme point computations}, AUTHOR = {Schirra, Stefan}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We present C{\tt ++}-implementations of some classical algorithms for computing extreme points of a set of points in two-dimensional space. The template feature of C{\tt ++} is used to provide generic code that works with various point types and various implementations of the primitives used in the extreme point computation. The parameterization makes the code flexible and adaptable. The code can be used with primitives provided by the CGAL-kernel, primitives provided by LEDA, and others. The interfaces of the convex hull functions are compliant with the Standard Template Library.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Schirra, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Parameterized implementations of classical planar convex hull algorithms and extreme point computations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BEB-2 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 93 p. %X We present C{\tt ++}-implementations of some classical algorithms for computing extreme points of a set of points in two-dimensional space. The template feature of C{\tt ++} is used to provide generic code that works with various point types and various implementations of the primitives used in the extreme point computation. The parameterization makes the code flexible and adaptable. The code can be used with primitives provided by the CGAL-kernel, primitives provided by LEDA, and others. The interfaces of the convex hull functions are compliant with the Standard Template Library. %B Research Report / Max-Planck-Institut f&#252;r Informatik
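The parameterization idea translates directly to other languages; a minimal Python analogue (a sketch of the design, not the report's C++ code) passes the orientation primitive as a parameter, so a floating-point or an exact kernel can be swapped in:

    def convex_hull(points, orient):
        # Andrew's monotone chain, generic in the orientation primitive
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def chain(seq):
            h = []
            for p in seq:
                while len(h) >= 2 and orient(h[-2], h[-1], p) <= 0:
                    h.pop()    # drop points that do not make a strict left turn
                h.append(p)
            return h
        lower, upper = chain(pts), chain(reversed(pts))
        return lower[:-1] + upper[:-1]   # counterclockwise hull, no duplicates

    def ccw(a, b, c):   # one possible primitive; an exact kernel fits equally well
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)], ccw))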
[130]
R. Solis-Oba, “2-Approximation algorithm for finding a spanning tree with maximum number of leaves,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1998-1-010, 1998.
Abstract
We study the problem of finding a spanning tree with maximum number of leaves. We present a simple 2-approximation algorithm for the problem, improving on the previous best performance ratio of 3 achieved by algorithms of Ravi and Lu. Our algorithm can be implemented to run in linear time using simple data structures. We also study the variant of the problem in which a given subset of vertices are required to be leaves in the tree. We provide a 5/2-approximation algorithm for this version of the problem.
Export
BibTeX
@techreport{Solis-Oba98, TITLE = {2-Approximation algorithm for finding a spanning tree with maximum number of leaves}, AUTHOR = {Solis-Oba, Roberto}, LANGUAGE = {eng}, NUMBER = {MPI-I-1998-1-010}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1998}, DATE = {1998}, ABSTRACT = {We study the problem of finding a spanning tree with maximum number of leaves. We present a simple 2-approximation algorithm for the problem, improving on the previous best performance ratio of 3 achieved by algorithms of Ravi and Lu. Our algorithm can be implemented to run in linear time using simple data structures. We also study the variant of the problem in which a given subset of vertices are required to be leaves in the tree. We provide a 5/2-approximation algorithm for this version of the problem}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Solis-Oba, Roberto %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T 2-Approximation algorithm for finding a spanning tree with maximum number of leaves : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-7BD6-0 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1998 %P 16 p. %X We study the problem of finding a spanning tree with maximum number of leaves. We present a simple 2-approximation algorithm for the problem, improving on the previous best performance ratio of 3 achieved by algorithms of Ravi and Lu. Our algorithm can be implemented to run in linear time using simple data structures. We also study the variant of the problem in which a given subset of vertices are required to be leaves in the tree. We provide a 5/2-approximation algorithm for this version of the problem %B Research Report / Max-Planck-Institut f&#252;r Informatik
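For comparison with the ratios in this abstract, the leaf count of any particular spanning tree is easy to compute; the sketch below builds a BFS tree and counts its leaves (a naive baseline only, since the report's 2-approximation grows the tree by a more careful expansion rule):

    from collections import deque

    def bfs_tree_leaf_count(adj, root):
        # adj: dict mapping each vertex to its list of neighbours
        parent = {root: None}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        internal = {p for p in parent.values() if p is not None}
        return len(parent) - len(internal)   # leaves = vertices with no tree child

    star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
    print(bfs_tree_leaf_count(star, 0))      # 4 leaves; optimal for a star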
1997
[131]
S. Albers, “Better bounds for online scheduling,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-009, 1997.
Abstract
We study a classical problem in online scheduling. A sequence of jobs must be scheduled on $m$ identical parallel machines. As each job arrives, its processing time is known. The goal is to minimize the makespan. Bartal, Fiat, Karloff and Vohra gave a deterministic online algorithm that is 1.986-competitive. Karger, Phillips and Torng generalized the algorithm and proved an upper bound of 1.945. The best lower bound currently known on the competitive ratio that can be achieved by deterministic online algorithms is equal to 1.837. In this paper we present an improved deterministic online scheduling algorithm that is 1.923-competitive, for all $m\geq 2$. The algorithm is based on a new scheduling strategy, i.e., it is not a generalization of the approach by Bartal {\it et al}. Also, the algorithm has a simple structure. Furthermore, we develop a better lower bound. We prove that, for general $m$, no deterministic online scheduling algorithm can be better than \mbox{1.852-competitive}.
Export
BibTeX
@techreport{Albers97, TITLE = {Better bounds for online scheduling}, AUTHOR = {Albers, Susanne}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-009}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {We study a classical problem in online scheduling. A sequence of jobs must be scheduled on $m$ identical parallel machines. As each job arrives, its processing time is known. The goal is to minimize the makespan. Bartal, Fiat, Karloff and Vohra gave a deterministic online algorithm that is 1.986-competitive. Karger, Phillips and Torng generalized the algorithm and proved an upper bound of 1.945. The best lower bound currently known on the competitive ratio that can be achieved by deterministic online algorithms is equal to 1.837. In this paper we present an improved deterministic online scheduling algorithm that is 1.923-competitive, for all $m\geq 2$. The algorithm is based on a new scheduling strategy, i.e., it is not a generalization of the approach by Bartal {\it et al}. Also, the algorithm has a simple structure. Furthermore, we develop a better lower bound. We prove that, for general $m$, no deterministic online scheduling algorithm can be better than \mbox{1.852-competitive}.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Albers, Susanne %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Better bounds for online scheduling : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9E1F-1 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 16 p. %X We study a classical problem in online scheduling. A sequence of jobs must be scheduled on $m$ identical parallel machines. As each job arrives, its processing time is known. The goal is to minimize the makespan. Bartal, Fiat, Karloff and Vohra gave a deterministic online algorithm that is 1.986-competitive. Karger, Phillips and Torng generalized the algorithm and proved an upper bound of 1.945. The best lower bound currently known on the competitive ratio that can be achieved by deterministic online algorithms is equal to 1.837. In this paper we present an improved deterministic online scheduling algorithm that is 1.923-competitive, for all $m\geq 2$. The algorithm is based on a new scheduling strategy, i.e., it is not a generalization of the approach by Bartal {\it et al}. Also, the algorithm has a simple structure. Furthermore, we develop a better lower bound. We prove that, for general $m$, no deterministic online scheduling algorithm can be better than \mbox{1.852-competitive}. %B Research Report / Max-Planck-Institut f&#252;r Informatik
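As a reference point for the ratios discussed here, the classical greedy (Graham's list scheduling) places each arriving job on a least-loaded machine and is (2 - 1/m)-competitive; the report's 1.923-competitive algorithm deliberately deviates from this behavior. A minimal sketch of the greedy baseline:

    import heapq

    def list_schedule(processing_times, m):
        # Graham's greedy: assign each arriving job to the least-loaded machine
        loads = [0.0] * m
        heapq.heapify(loads)
        for p in processing_times:
            least = heapq.heappop(loads)
            heapq.heappush(loads, least + p)
        return max(loads)   # the resulting makespan

    print(list_schedule([2, 3, 4, 6, 2, 2], m=3))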
[132]
S. Albers and M. R. Henzinger, “Exploring unknown environments,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-017, 1997.
Abstract
We consider exploration problems where a robot has to construct a complete map of an unknown environment. We assume that the environment is modeled by a directed, strongly connected graph. The robot's task is to visit all nodes and edges of the graph using the minimum number $R$ of edge traversals. Koutsoupias~\cite{K} gave a lower bound for $R$ of $\Omega(d^2 m)$, and Deng and Papadimitriou~\cite{DP} showed an upper bound of $d^{O(d)} m$, where $m$ is the number of edges in the graph and $d$ is the minimum number of edges that have to be added to make the graph Eulerian. We give the first sub-exponential algorithm for this exploration problem, which achieves an upper bound of $d^{O(\log d)} m$. We also show a matching lower bound of $d^{\Omega(\log d)}m$ for our algorithm. Additionally, we give lower bounds of $2^{\Omega(d)}m$, resp.\ $d^{\Omega(\log d)}m$ for various other natural exploration algorithms.
Export
BibTeX
@techreport{AlbersHenzinger97, TITLE = {Exploring unknown environments}, AUTHOR = {Albers, Susanne and Henzinger, Monika R.}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-017}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {We consider exploration problems where a robot has to construct a complete map of an unknown environment. We assume that the environment is modeled by a directed, strongly connected graph. The robot's task is to visit all nodes and edges of the graph using the minimum number $R$ of edge traversals. Koutsoupias~\cite{K} gave a lower bound for $R$ of $\Omega(d^2 m)$, and Deng and Papadimitriou~\cite{DP} showed an upper bound of $d^{O(d)} m$, where $m$ is the number of edges in the graph and $d$ is the minimum number of edges that have to be added to make the graph Eulerian. We give the first sub-exponential algorithm for this exploration problem, which achieves an upper bound of $d^{O(\log d)} m$. We also show a matching lower bound of $d^{\Omega(\log d)}m$ for our algorithm. Additionally, we give lower bounds of $2^{\Omega(d)}m$, resp.\ $d^{\Omega(\log d)}m$ for various other natural exploration algorithms.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Albers, Susanne %A Henzinger, Monika R. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Exploring unknown environments : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9D82-5 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 23 p. %X We consider exploration problems where a robot has to construct a complete map of an unknown environment. We assume that the environment is modeled by a directed, strongly connected graph. The robot's task is to visit all nodes and edges of the graph using the minimum number $R$ of edge traversals. Koutsoupias~\cite{K} gave a lower bound for $R$ of $\Omega(d^2 m)$, and Deng and Papadimitriou~\cite{DP} showed an upper bound of $d^{O(d)} m$, where $m$ is the number of edges in the graph and $d$ is the minimum number of edges that have to be added to make the graph Eulerian. We give the first sub-exponential algorithm for this exploration problem, which achieves an upper bound of $d^{O(\log d)} m$. We also show a matching lower bound of $d^{\Omega(\log d)}m$ for our algorithm. Additionally, we give lower bounds of $2^{\Omega(d)}m$, resp.\ $d^{\Omega(\log d)}m$ for various other natural exploration algorithms. %B Research Report / Max-Planck-Institut f&#252;r Informatik
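The quantity $R$ can be made concrete with a toy strategy: follow an untraversed out-edge whenever one exists, and otherwise relocate along already-traversed edges to the nearest vertex that still has one (a naive strategy for illustration only, without the report's $d^{O(\log d)} m$ guarantee):

    from collections import deque

    def greedy_exploration(out_edges, start):
        # out_edges: adjacency lists of a strongly connected digraph
        used = {u: 0 for u in out_edges}    # out-edges traversed so far, per node
        cur, R = start, 0
        while True:
            if used[cur] < len(out_edges[cur]):
                nxt = out_edges[cur][used[cur]]   # take a fresh out-edge
                used[cur] += 1
                cur, R = nxt, R + 1
                continue
            # relocate: BFS over already-traversed edges to the nearest node
            # that still has an untraversed out-edge
            parent, queue, goal = {cur: None}, deque([cur]), None
            while queue and goal is None:
                u = queue.popleft()
                for v in out_edges[u][:used[u]]:
                    if v not in parent:
                        parent[v] = u
                        if used[v] < len(out_edges[v]):
                            goal = v
                            break
                        queue.append(v)
            if goal is None:
                return R        # no fresh edge reachable: the map is complete
            dest, hops = goal, 0
            while goal != cur:
                goal, hops = parent[goal], hops + 1
            cur, R = dest, R + hops

    print(greedy_exploration({0: [1], 1: [2, 0], 2: [0]}, start=0))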
[133]
D. Alberts, C. Gutwenger, P. Mutzel, and S. Näher, “AGD-Library: A Library of Algorithms for Graph Drawing,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-019, 1997.
Abstract
A graph drawing algorithm produces a layout of a graph in two- or three-dimensional space that should be readable and easy to understand. Since the aesthetic criteria differ from one application area to another, it is unlikely that a definition of the ``optimal drawing'' of a graph in a strict mathematical sense exists. A large number of graph drawing algorithms taking different aesthetic criteria into account have already been proposed. In this paper we describe the design and implementation of the AGD--Library, a library of {\bf A}lgorithms for {\bf G}raph {\bf D}rawing. The library offers a broad range of existing algorithms for two-dimensional graph drawing and tools for implementing new algorithms. The library is written in \CC using the LEDA platform for combinatorial and geometric computing (\cite{Mehlhorn-Naeher:CACM,LEDA-Manual}). The algorithms are implemented independently of the underlying visualization or graphics system by using a generic layout interface. Most graph drawing algorithms place a set of restrictions on the input graphs like planarity or biconnectivity. We provide a mechanism for declaring this precondition for a particular algorithm and checking it for potential input graphs. A drawing model can be characterized by a set of properties of the drawing. We call these properties the postcondition of the algorithm. There is support for maintaining and retrieving the postcondition of an algorithm.
Export
BibTeX
@techreport{AlbertsGutwengerMutzelNaher, TITLE = {{AGD}-Library: A Library of Algorithms for Graph Drawing}, AUTHOR = {Alberts, David and Gutwenger, Carsten and Mutzel, Petra and N{\"a}her, Stefan}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-019}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {A graph drawing algorithm produces a layout of a graph in two- or three-dimensional space that should be readable and easy to understand. Since the aesthetic criteria differ from one application area to another, it is unlikely that a definition of the ``optimal drawing'' of a graph in a strict mathematical sense exists. A large number of graph drawing algorithms taking different aesthetic criteria into account have already been proposed. In this paper we describe the design and implementation of the AGD--Library, a library of {\bf A}lgorithms for {\bf G}raph {\bf D}rawing. The library offers a broad range of existing algorithms for two-dimensional graph drawing and tools for implementing new algorithms. The library is written in \CC using the LEDA platform for combinatorial and geometric computing (\cite{Mehlhorn-Naeher:CACM,LEDA-Manual}). The algorithms are implemented independently of the underlying visualization or graphics system by using a generic layout interface. Most graph drawing algorithms place a set of restrictions on the input graphs like planarity or biconnectivity. We provide a mechanism for declaring this precondition for a particular algorithm and checking it for potential input graphs. A drawing model can be characterized by a set of properties of the drawing. We call these properties the postcondition of the algorithm. There is support for maintaining and retrieving the postcondition of an algorithm.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Alberts, David %A Gutwenger, Carsten %A Mutzel, Petra %A N&#228;her, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T AGD-Library: A Library of Algorithms for Graph Drawing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9D7C-6 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 13 p. %X A graph drawing algorithm produces a layout of a graph in two- or three-dimensional space that should be readable and easy to understand. Since the aesthetic criteria differ from one application area to another, it is unlikely that a definition of the ``optimal drawing'' of a graph in a strict mathematical sense exists. A large number of graph drawing algorithms taking different aesthetic criteria into account have already been proposed. In this paper we describe the design and implementation of the AGD--Library, a library of {\bf A}lgorithms for {\bf G}raph {\bf D}rawing. The library offers a broad range of existing algorithms for two-dimensional graph drawing and tools for implementing new algorithms. The library is written in \CC using the LEDA platform for combinatorial and geometric computing (\cite{Mehlhorn-Naeher:CACM,LEDA-Manual}). The algorithms are implemented independently of the underlying visualization or graphics system by using a generic layout interface. Most graph drawing algorithms place a set of restrictions on the input graphs like planarity or biconnectivity. We provide a mechanism for declaring this precondition for a particular algorithm and checking it for potential input graphs. A drawing model can be characterized by a set of properties of the drawing. We call these properties the postcondition of the algorithm. There is support for maintaining and retrieving the postcondition of an algorithm. %B Research Report / Max-Planck-Institut f&#252;r Informatik
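The precondition/postcondition mechanism described here is essentially a contract attached to each layout algorithm. A hypothetical Python rendering of the idea (AGD itself is a C++ library, so all names and the API below are illustrative only):

    def requires(precondition):
        # attach a declared precondition to a layout algorithm and check it
        def wrap(layout):
            def checked(graph, *args, **kwargs):
                if not precondition(graph):
                    raise ValueError("precondition %s violated" % precondition.__name__)
                return layout(graph, *args, **kwargs)
            checked.precondition = precondition   # retrievable, as in the library
            return checked
        return wrap

    def is_connected(graph):
        # graph: dict of adjacency lists; search from an arbitrary vertex
        seen, stack = set(), [next(iter(graph))]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(graph[u])
        return len(seen) == len(graph)

    @requires(is_connected)
    def line_layout(graph):
        # toy layout standing in for a real algorithm; its postcondition
        # (properties of the produced drawing) could be declared similarly
        return {v: (i, 0) for i, v in enumerate(graph)}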
[134]
E. Althaus and K. Mehlhorn, “Maximum network flow with floating point arithmetic,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-022, 1997.
Abstract
We discuss the implementation of network flow algorithms in floating point arithmetic. We give an example to illustrate the difficulties that may arise when floating point arithmetic is used without care. We describe an iterative improvement scheme that can be put around any network flow algorithm for integer capacities. The scheme carefully scales the capacities such that all integers arising can be handled exactly using floating point arithmetic. For $m \le 10^9$ and double precision floating point arithmetic the number of iterations is always bounded by three and the relative error in the flow value is at most $2^{-19}$. For $m \le 10^6$ and double precision arithmetic the relative error after the first iteration is bounded by $10^{-3}$.
Export
BibTeX
@techreport{AlthausMehlhorn97, TITLE = {Maximum network flow with floating point arithmetic}, AUTHOR = {Althaus, Ernst and Mehlhorn, Kurt}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-022}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {We discuss the implementation of network flow algorithms in floating point arithmetic. We give an example to illustrate the difficulties that may arise when floating point arithmetic is used without care. We describe an iterative improvement scheme that can be put around any network flow algorithm for integer capacities. The scheme carefully scales the capacities such that all integers arising can be handled exactly using floating point arithmetic. For $m \le 10^9$ and double precision floating point arithmetic the number of iterations is always bounded by three and the relative error in the flow value is at most $2^{-19}$. For $m \le 10^6$ and double precision arithmetic the relative error after the first iteration is bounded by $10^{-3}$.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Althaus, Ernst %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Maximum network flow with floating point arithmetic : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9D72-9 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 5 p. %X We discuss the implementation of network flow algorithms in floating point arithmetic. We give an example to illustrate the difficulties that may arise when floating point arithmetic is used without care. We describe an iterative improvement scheme that can be put around any network flow algorithm for integer capacities. The scheme carefully scales the capacities such that all integers arising can be handled exactly using floating point arithmetic. For $m \le 10^9$ and double precision floating point arithmetic the number of iterations is always bounded by three and the relative error in the flow value is at most $2^{-19}$. For $m \le 10^6$ and double precision arithmetic the relative error after the first iteration is bounded by $10^{-3}$. %B Research Report / Max-Planck-Institut f&#252;r Informatik
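The core trick is easy to state: flow computations with integer capacities only ever add and subtract capacities, doubles represent every integer up to $2^{53}$ exactly, and so scaling the capacities to integers whose sum stays below $2^{53}$ rules out rounding error inside the flow algorithm. A sketch of one such scaling pass (the report wraps an iterative refinement around this idea and proves the error bounds quoted above):

    import math

    def scale_to_exact_integers(capacities, mantissa_bits=53):
        # scale so that the sum of all capacities stays within 2^53, the
        # range in which double precision represents every integer exactly
        m = len(capacities)
        s = 2.0 ** mantissa_bits / (m * max(capacities))
        return s, [math.floor(c * s) for c in capacities]

    s, caps = scale_to_exact_integers([2.5, 0.3, 1.7, 4.4])
    # run any integer-capacity max-flow code on caps; divide the result by s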
[135]
F. J. Brandenburg, M. Jünger, and P. Mutzel, “Algorithmen zum automatischen Zeichnen von Graphen,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-007, 1997.
Abstract
Graph drawing is a young and flourishing area of computer science. It is concerned with the design, analysis, implementation, and evaluation of new algorithms for aesthetically pleasing drawings of graphs. Using selected application examples, problem settings, and solution approaches, we give an introduction to this still relatively unknown area, together with an overview of the activities and goals of a working group funded by the DFG within the priority program "Effiziente Algorithmen für Diskrete Probleme und ihre Anwendungen", whose members come from the Universities of Halle, Köln, and Passau and from the Max-Planck-Institut für Informatik in Saarbrücken.
Export
BibTeX
@techreport{BrandenburgJuengerMutzel97, TITLE = {{Algorithmen zum automatischen Zeichnen von Graphen}}, AUTHOR = {Brandenburg, Franz J. and J{\"u}nger, Michael and Mutzel, Petra}, LANGUAGE = {deu}, NUMBER = {MPI-I-1997-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {Das Zeichnen von Graphen ist ein junges aufbl{\"u}hendes Gebiet der Informatik. Es befasst sich mit Entwurf, Analyse, Implementierung und Evaluierung von neuen Algorithmen f{\"u}r {\"a}sthetisch sch{\"o}ne Zeichnungen von Graphen. Anhand von selektierten Anwendungsbeispielen, Problemstellungen und L{\"o}sungsans{\"a}tzen wollen wir in dieses noch relativ unbekannte Gebiet einf{\"u}hren und gleichzeitig einen {\"U}berblick {\"u}ber die Aktivit{\"a}ten und Ziele einer von der DFG im Rahmen des Schwerpunktprogramms "`Effiziente Algorithmen f{\"u}r Diskrete Probleme und ihre Anwendungen"' gef{\"o}rderten Arbeitsgruppe aus Mitgliedern der Universit{\"a}ten Halle, K{\"o}ln und Passau und des Max-Planck-Instituts f{\"u}r Informatik in Saarbr{\"u}cken geben.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Brandenburg, Franz J. %A J&#252;nger, Michael %A Mutzel, Petra %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Algorithmen zum automatischen Zeichnen von Graphen : %G deu %U http://hdl.handle.net/11858/00-001M-0000-0014-9F6D-7 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 9 S. %X Das Zeichnen von Graphen ist ein junges aufbl&#252;hendes Gebiet der Informatik. Es befasst sich mit Entwurf, Analyse, Implementierung und Evaluierung von neuen Algorithmen f&#252;r &#228;sthetisch sch&#246;ne Zeichnungen von Graphen. Anhand von selektierten Anwendungsbeispielen, Problemstellungen und L&#246;sungsans&#228;tzen wollen wir in dieses noch relativ unbekannte Gebiet einf&#252;hren und gleichzeitig einen &#220;berblick &#252;ber die Aktivit&#228;ten und Ziele einer von der DFG im Rahmen des Schwerpunktprogramms "`Effiziente Algorithmen f&#252;r Diskrete Probleme und ihre Anwendungen"' gef&#246;rderten Arbeitsgruppe aus Mitgliedern der Universit&#228;ten Halle, K&#246;ln und Passau und des Max-Planck-Instituts f&#252;r Informatik in Saarbr&#252;cken geben. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[136]
G. S. Brodal, J. L. Träff, and C. Zaroliagis, “A parallel priority queue with constant time operations,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-011, 1997.
Abstract
We present a parallel priority queue that supports the following operations in constant time: {\em parallel insertion\/} of a sequence of elements ordered according to key, {\em parallel decrease key\/} for a sequence of elements ordered according to key, {\em deletion of the minimum key element}, as well as {\em deletion of an arbitrary element}. Our data structure is the first to support multi insertion and multi decrease key in constant time. The priority queue can be implemented on the EREW PRAM, and can perform any sequence of $n$ operations in $O(n)$ time and $O(m\log n)$ work, $m$ being the total number of keys inserted and/or updated. A main application is a parallel implementation of Dijkstra's algorithm for the single-source shortest path problem, which runs in $O(n)$ time and $O(m\log n)$ work on a CREW PRAM on graphs with $n$ vertices and $m$ edges. This is a logarithmic factor improvement in the running time compared with previous approaches.
Export
BibTeX
@techreport{BrodalTraffZaroliagis97, TITLE = {A parallel priority queue with constant time operations}, AUTHOR = {Brodal, Gerth St{\o}lting and Tr{\"a}ff, Jesper Larsson and Zaroliagis, Christos}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-011}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {We present a parallel priority queue that supports the following operations in constant time: {\em parallel insertion\/} of a sequence of elements ordered according to key, {\em parallel decrease key\/} for a sequence of elements ordered according to key, {\em deletion of the minimum key element}, as well as {\em deletion of an arbitrary element}. Our data structure is the first to support multi insertion and multi decrease key in constant time. The priority queue can be implemented on the EREW PRAM, and can perform any sequence of $n$ operations in $O(n)$ time and $O(m\log n)$ work, $m$ being the total number of keys inserted and/or updated. A main application is a parallel implementation of Dijkstra's algorithm for the single-source shortest path problem, which runs in $O(n)$ time and $O(m\log n)$ work on a CREW PRAM on graphs with $n$ vertices and $m$ edges. This is a logarithmic factor improvement in the running time compared with previous approaches.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Brodal, Gerth St&#248;lting %A Tr&#228;ff, Jesper Larsson %A Zaroliagis, Christos %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A parallel priority queue with constant time operations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9E19-D %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 19 p. %X We present a parallel priority queue that supports the following operations in constant time: {\em parallel insertion\/} of a sequence of elements ordered according to key, {\em parallel decrease key\/} for a sequence of elements ordered according to key, {\em deletion of the minimum key element}, as well as {\em deletion of an arbitrary element}. Our data structure is the first to support multi insertion and multi decrease key in constant time. The priority queue can be implemented on the EREW PRAM, and can perform any sequence of $n$ operations in $O(n)$ time and $O(m\log n)$ work, $m$ being the total number of keys inserted and/or updated. A main application is a parallel implementation of Dijkstra's algorithm for the single-source shortest path problem, which runs in $O(n)$ time and $O(m\log n)$ work on a CREW PRAM on graphs with $n$ vertices and $m$ edges. This is a logarithmic factor improvement in the running time compared with previous approaches. %B Research Report / Max-Planck-Institut f&#252;r Informatik
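The operation set is easiest to pin down with a sequential reference implementation of the same interface (semantics only; the report's contribution is performing each operation in constant time in parallel on an EREW PRAM, which no sequential sketch can reproduce):

    import heapq

    class MultiPQ:
        def __init__(self):
            self.best = {}    # current key of each element
            self.heap = []    # lazy heap of (key, element) pairs

        def multi_insert(self, pairs):          # pairs: sequence sorted by key
            for key, elem in pairs:
                self.best[elem] = key
                heapq.heappush(self.heap, (key, elem))

        def multi_decrease_key(self, pairs):    # pairs: sequence sorted by key
            for key, elem in pairs:
                if key < self.best.get(elem, float("inf")):
                    self.best[elem] = key
                    heapq.heappush(self.heap, (key, elem))   # lazy decrease

        def delete(self, elem):                 # deletion of an arbitrary element
            self.best.pop(elem, None)

        def delete_min(self):
            while self.heap:
                key, elem = heapq.heappop(self.heap)
                if self.best.get(elem) == key:  # skip superseded/deleted entries
                    del self.best[elem]
                    return key, elem
            return None

The batched operations are what make the Dijkstra application work: all relaxations of one settled vertex arrive as a single key-sorted sequence.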
[137]
G. S. Brodal, “Finger search trees with constant insertion time,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-020, 1997.
Abstract
We consider the problem of implementing finger search trees on the pointer machine, {\it i.e.}, how to maintain a sorted list such that searching for an element $x$, starting the search at any arbitrary element $f$ in the list, only requires logarithmic time in the distance between $x$ and $f$ in the list. We present the first pointer-based implementation of finger search trees allowing new elements to be inserted at any arbitrary position in the list in worst case constant time. Previously, the best known insertion time on the pointer machine was $O(\log^* n)$, where $n$ is the total length of the list. On a unit-cost RAM, a constant insertion time has been achieved by Dietz and Raman by using standard techniques of packing small problem sizes into a constant number of machine words. Deletion of a list element is supported in $O(\log^* n)$ time, which matches the previous best bounds. Our data structure requires linear space.
Export
BibTeX
@techreport{Brodal97, TITLE = {Finger search trees with constant insertion time}, AUTHOR = {Brodal, Gerth St{\o}lting}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-020}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {We consider the problem of implementing finger search trees on the pointer machine, {\it i.e.}, how to maintain a sorted list such that searching for an element $x$, starting the search at any arbitrary element $f$ in the list, only requires logarithmic time in the distance between $x$ and $f$ in the list. We present the first pointer-based implementation of finger search trees allowing new elements to be inserted at any arbitrary position in the list in worst case constant time. Previously, the best known insertion time on the pointer machine was $O(\log^* n)$, where $n$ is the total length of the list. On a unit-cost RAM, a constant insertion time has been achieved by Dietz and Raman by using standard techniques of packing small problem sizes into a constant number of machine words. Deletion of a list element is supported in $O(\log^* n)$ time, which matches the previous best bounds. Our data structure requires linear space.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Brodal, Gerth St&#248;lting %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Finger search trees with constant insertion time : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9D79-C %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 17 p. %X We consider the problem of implementing finger search trees on the pointer machine, {\it i.e.}, how to maintain a sorted list such that searching for an element $x$, starting the search at any arbitrary element $f$ in the list, only requires logarithmic time in the distance between $x$ and $f$ in the list. We present the first pointer-based implementation of finger search trees allowing new elements to be inserted at any arbitrary position in the list in worst case constant time. Previously, the best known insertion time on the pointer machine was $O(\log^* n)$, where $n$ is the total length of the list. On a unit-cost RAM, a constant insertion time has been achieved by Dietz and Raman by using standard techniques of packing small problem sizes into a constant number of machine words. Deletion of a list element is supported in $O(\log^* n)$ time, which matches the previous best bounds. Our data structure requires linear space. %B Research Report / Max-Planck-Institut f&#252;r Informatik
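The search guarantee (logarithmic in the distance $d$ between the finger and the query, rather than in the total length $n$) is easy to demonstrate on a sorted array with galloping; this sketch shows only the query side and none of the report's pointer-machine structure with constant-time insertion:

    from bisect import bisect_left

    def finger_search(a, f, x):
        # insertion index of x in sorted list a, found from finger index f
        # in O(log d) comparisons, d being the rank distance from f to x
        step = 1
        if x > a[f]:                       # gallop right, doubling the step
            lo, hi = f, f + 1
            while hi < len(a) and a[hi] < x:
                lo, hi = hi, min(hi + step, len(a))
                step *= 2
            return bisect_left(a, x, lo + 1, hi)
        else:                              # gallop left symmetrically
            lo, hi = f - 1, f
            while lo >= 0 and a[lo] >= x:
                hi, lo = lo, max(lo - step, -1)
                step *= 2
            return bisect_left(a, x, lo + 1, hi)

    a = list(range(0, 1000, 2))
    assert finger_search(a, 5, 11) == bisect_left(a, 11)   # few steps from the finger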
[138]
W. H. Cunningham and Y. Wang, “Restricted 2-factor polytopes,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-006, 1997.
Abstract
The optimal $k$-restricted 2-factor problem consists of finding, in a complete undirected graph $K_n$, a minimum cost 2-factor (subgraph having degree 2 at every node) with all components having more than $k$ nodes. The problem is a relaxation of the well-known symmetric travelling salesman problem, and is equivalent to it when $\frac{n}{2}\leq k\leq n-1$. We study the $k$-restricted 2-factor polytope. We present a large class of valid inequalities, called bipartition inequalities, and describe some of their properties; some of these results are new even for the travelling salesman polytope. For the case $k=3$, the triangle-free 2-factor polytope, we derive a necessary and sufficient condition for such inequalities to be facet inducing.
Export
BibTeX
@techreport{CunninghamWang97, TITLE = {Restricted 2-factor polytopes}, AUTHOR = {Cunningham, William H. and Wang, Yaoguang}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {The optimal $k$-restricted 2-factor problem consists of finding, in a complete undirected graph $K_n$, a minimum cost 2-factor (subgraph having degree 2 at every node) with all components having more than $k$ nodes. The problem is a relaxation of the well-known symmetric travelling salesman problem, and is equivalent to it when $\frac{n}{2}\leq k\leq n-1$. We study the $k$-restricted 2-factor polytope. We present a large class of valid inequalities, called bipartition inequalities, and describe some of their properties; some of these results are new even for the travelling salesman polytope. For the case $k=3$, the triangle-free 2-factor polytope, we derive a necessary and sufficient condition for such inequalities to be facet inducing.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Cunningham, William H. %A Wang, Yaoguang %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Restricted 2-factor polytopes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9F73-8 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 30 p. %X The optimal $k$-restricted 2-factor problem consists of finding, in a complete undirected graph $K_n$, a minimum cost 2-factor (subgraph having degree 2 at every node) with all components having more than $k$ nodes. The problem is a relaxation of the well-known symmetric travelling salesman problem, and is equivalent to it when $\frac{n}{2}\leq k\leq n-1$. We study the $k$-restricted 2-factor polytope. We present a large class of valid inequalities, called bipartition inequalities, and describe some of their properties; some of these results are new even for the travelling salesman polytope. For the case $k=3$, the triangle-free 2-factor polytope, we derive a necessary and sufficient condition for such inequalities to be facet inducing. %B Research Report / Max-Planck-Institut f&#252;r Informatik
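For orientation, the object under study can be written down explicitly in the notation standard for this literature, with $x(\delta(v))$ the sum of the edge variables incident to node $v$ (the display below is a standard rendering, not quoted from the report):

\[ P_k \;=\; \mathrm{conv}\bigl\{\, x \in \{0,1\}^{E(K_n)} \;:\; x(\delta(v)) = 2 \ \text{for all nodes } v, \ \text{no component of the support of } x \text{ has at most } k \text{ nodes} \,\bigr\}. \]

The bipartition inequalities are inequalities valid for $P_k$; since the components of a 2-factor are cycles, for $k=3$ the support condition excludes exactly the triangles, which gives the triangle-free 2-factor polytope named in the abstract.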
[139]
A. Fiat and S. Leonardi, “On-line network routing - a survey,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-97-1-026, 1997.
Export
BibTeX
@techreport{fiatLeonardi97, TITLE = {On-line network routing -- a survey}, AUTHOR = {Fiat, Amos and Leonardi, Stefano}, LANGUAGE = {eng}, NUMBER = {MPI-I-97-1-026}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Fiat, Amos %A Leonardi, Stefano %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On-line network routing - a survey : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9CD2-A %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 19 p. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[140]
R. Fleischer, “On the Bahncard problem,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-018, 1997.
Abstract
In this paper, we generalize the {\em Ski-Rental Problem} to the {\em Bahncardproblem} which is an online problem of practical relevance for all travelers. The Bahncard is a railway pass of the Deutsche Bundesbahn (the German railway company) which entitles its holder to a 50\%\ price reduction on nearly all train tickets. It costs 240\thinspace DM, and it is valid for 12 months. For the common traveler, the decision at which time to buy a Bahncard is a typical online problem, because she usually does not know when and where she will travel next. We show that the greedy algorithm applied by most travelers and clerks at ticket offices is not better in the worst case than the trivial algorithm which never buys a Bahncard. We present two optimal deterministic online algorithms, an optimistic one and a pessimistic one. We further give a lower bound for randomized online algorithms and present an algorithm which we conjecture to be optimal; a proof of the conjecture is given for a special case of the problem. It turns out that the optimal competitive ratio only depends on the price reduction factor (50\%\ for the German Bahncardproblem), but does not depend on the price or validity period of a Bahncard.
Export
BibTeX
@techreport{Fleischer97, TITLE = {On the Bahncard problem}, AUTHOR = {Fleischer, Rudolf}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1997-1-018}, NUMBER = {MPI-I-1997-1-018}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {In this paper, we generalize the {\em Ski-Rental Problem} to the {\em Bahncardproblem} which is an online problem of practical relevance for all travelers. The Bahncard is a railway pass of the Deutsche Bundesbahn (the German railway company) which entitles its holder to a 50\%\ price reduction on nearly all train tickets. It costs 240\thinspace DM, and it is valid for 12 months. For the common traveler, the decision at which time to buy a Bahncard is a typical online problem, because she usually does not know when and where she will travel next. We show that the greedy algorithm applied by most travelers and clerks at ticket offices is not better in the worst case than the trivial algorithm which never buys a Bahncard. We present two optimal deterministic online algorithms, an optimistic one and a pessimistic one. We further give a lower bound for randomized online algorithms and present an algorithm which we conjecture to be optimal; a proof of the conjecture is given for a special case of the problem. It turns out that the optimal competitive ratio only depends on the price reduction factor (50\%\ for the German Bahncardproblem), but does not depend on the price or validity period of a Bahncard.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Fleischer, Rudolf %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Bahncard problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9D7F-F %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1997-1-018 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 16 p. %X In this paper, we generalize the {\em Ski-Rental Problem} to the {\em Bahncardproblem} which is an online problem of practical relevance for all travelers. The Bahncard is a railway pass of the Deutsche Bundesbahn (the German railway company) which entitles its holder to a 50\%\ price reduction on nearly all train tickets. It costs 240\thinspace DM, and it is valid for 12 months. For the common traveler, the decision at which time to buy a Bahncard is a typical online problem, because she usually does not know when and where she will travel next. We show that the greedy algorithm applied by most travelers and clerks at ticket offices is not better in the worst case than the trivial algorithm which never buys a Bahncard. We present two optimal deterministic online algorithms, an optimistic one and a pessimistic one. We further give a lower bound for randomized online algorithms and present an algorithm which we conjecture to be optimal; a proof of the conjecture is given for a special case of the problem. It turns out that the optimal competitive ratio only depends on the price reduction factor (50\%\ for the German Bahncardproblem), but does not depend on the price or validity period of a Bahncard. %B Research Report / Max-Planck-Institut f&#252;r Informatik
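The flavor of such strategies can be conveyed by a simple break-even rule in the ski-rental spirit (a sketch only; the report's optimal algorithms refine when and how spending within a validity period is counted):

    def should_buy_bahncard(recent_full_price_spend, next_ticket,
                            card_cost=240.0, beta=0.5):
        # break-even arithmetic: a card saves beta per DM of regular-price
        # tickets, so the savings first cover the card price once spending
        # within one validity period reaches card_cost / beta (480 DM here)
        return recent_full_price_spend + next_ticket >= card_cost / beta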
[141]
N. Garg and J. Könemann, “Faster and simpler algorithms for multicommodity flow and other fractional packing problems,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-97-1-025, 1997.
Abstract
This paper considers the problem of designing fast, approximate, combinatorial algorithms for multicommodity flows and other fractional packing problems. We provide a different approach to these problems which yields faster and much simpler algorithms. In particular we provide the first polynomial-time, combinatorial approximation algorithm for the fractional packing problem; in fact the running time of our algorithm is strongly polynomial. Our approach also allows us to substitute shortest path computations for min-cost flow computations in computing maximum concurrent flow and min-cost multicommodity flow; this yields much faster algorithms when the number of commodities is large.
Export
BibTeX
@techreport{GargKoenemann97, TITLE = {Faster and simpler algorithms for multicommodity flow and other fractional packing problems}, AUTHOR = {Garg, Naveen and K{\"o}nemann, Jochen}, LANGUAGE = {eng}, NUMBER = {MPI-I-97-1-025}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {This paper considers the problem of designing fast, approximate, combinatorial algorithms for multicommodity flows and other fractional packing problems. We provide a different approach to these problems which yields faster and much simpler algorithms. In particular we provide the first polynomial-time, combinatorial approximation algorithm for the fractional packing problem; in fact the running time of our algorithm is strongly polynomial. Our approach also allows us to substitute shortest path computations for min-cost flow computations in computing maximum concurrent flow and min-cost multicommodity flow; this yields much faster algorithms when the number of commodities is large.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Garg, Naveen %A K&#246;nemann, Jochen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Faster and simpler algorithms for multicommodity flow and other fractional packing problems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9CD9-B %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 13 p. %X This paper considers the problem of designing fast, approximate, combinatorial algorithms for multicommodity flows and other fractional packing problems. We provide a different approach to these problems which yields faster and much simpler algorithms. In particular we provide the first polynomial-time, combinatorial approximation algorithm for the fractional packing problem; in fact the running time of our algorithm is strongly polynomial. Our approach also allows us to substitute shortest path computations for min-cost flow computations in computing maximum concurrent flow and min-cost multicommodity flow; this yields much faster algorithms when the number of commodities is large. %B Research Report / Max-Planck-Institut f&#252;r Informatik
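The shortest-path-based scheme this abstract describes is easy to sketch. The following Python code reconstructs the overall loop for maximum multicommodity flow, routing along shortest paths under multiplicatively growing edge lengths; the initialization and scaling constants follow the usual analysis and are assumptions of this sketch, not the report's exact pseudocode.

```python
import heapq
from math import log

def max_multicommodity_flow(edges, pairs, eps=0.1):
    """Sketch of a Garg-Koenemann-style scheme. `edges` maps a directed
    pair (u, v) to its capacity; `pairs` lists (source, sink) commodities.
    Returns a lower bound on the maximum total flow value.
    """
    m = len(edges)
    delta = (1.0 + eps) / ((1.0 + eps) * m) ** (1.0 / eps)
    length = {e: delta for e in edges}        # initial edge lengths
    adj = {}
    for (u, v) in edges:
        adj.setdefault(u, []).append(v)

    def shortest_path(s, t):                  # Dijkstra under `length`
        dist, prev, heap = {s: 0.0}, {}, [(0.0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v in adj.get(u, []):
                nd = d + length[(u, v)]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        if t not in dist:
            return None, float("inf")
        path, v = [], t
        while v != s:
            path.append((prev[v], v))
            v = prev[v]
        return path[::-1], dist[t]

    raw = 0.0
    while True:
        path, d = min((shortest_path(s, t) for s, t in pairs),
                      key=lambda pd: pd[1])
        if d >= 1.0:                          # every remaining path is "long"
            break
        c = min(edges[e] for e in path)       # bottleneck capacity
        raw += c
        for e in path:                        # multiplicative length update
            length[e] *= 1.0 + eps * c / edges[e]
    # scaling down by log_{1+eps}(1/delta) makes the routed flow feasible
    return raw / (log(1.0 / delta) / log(1.0 + eps))
```

Note how the only subroutine is a shortest-path computation, which is exactly the substitution for min-cost flow that the abstract highlights.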
[142]
N. Garg, G. Konjevod, and R. Ravi, “A polylogarithmic approximation algorithm for group Steiner tree problem,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-97-1-027, 1997.
Abstract
The group Steiner tree problem is a generalization of the Steiner tree problem where we are given several subsets (groups) of vertices in a weighted graph, and the goal is to find a minimum-weight connected subgraph containing at least one vertex from each group. The problem was introduced by Reich and Widmayer and finds applications in VLSI design. The group Steiner tree problem generalizes the set covering problem, and is therefore at least as hard. We give a randomized $O(\log^3 n \log k)$-approximation algorithm for the group Steiner tree problem on an $n$-node graph, where $k$ is the number of groups. The best previous performance guarantee was $(1+\frac{\ln k}{2})\sqrt{k}$ (Bateman, Helvig, Robins and Zelikovsky). Noting that the group Steiner problem also models the network design problems with location-theoretic constraints studied by Marathe, Ravi and Sundaram, our results also improve their bicriteria approximation results. Similarly, we improve previous results by Slav{\'\i}k on a tour version, called the errand scheduling problem. We use the result of Bartal on probabilistic approximation of finite metric spaces by tree metrics to reduce the problem to one in a tree metric. To find a solution on a tree, we use a generalization of randomized rounding. Our approximation guarantees improve to $O(\log^2 n \log k)$ in the case of graphs that exclude small minors by using a better alternative to Bartal's result on probabilistic approximations of metrics induced by such graphs (Konjevod, Ravi and Salman) -- this improvement is valid for the group Steiner problem on planar graphs as well as on a set of points in the 2D-Euclidean case.
Export
BibTeX
@techreport{GargKonjevodRavi97, TITLE = {A polylogarithmic approximation algorithm for group Steiner tree problem}, AUTHOR = {Garg, Naveen and Konjevod, Goran and Ravi, R.}, LANGUAGE = {eng}, NUMBER = {MPI-I-97-1-027}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {The group Steiner tree problem is a generalization of the Steiner tree problem where we are given several subsets (groups) of vertices in a weighted graph, and the goal is to find a minimum-weight connected subgraph containing at least one vertex from each group. The problem was introduced by Reich and Widmayer and finds applications in VLSI design. The group Steiner tree problem generalizes the set covering problem, and is therefore at least as hard. We give a randomized $O(\log^3 n \log k)$-approximation algorithm for the group Steiner tree problem on an $n$-node graph, where $k$ is the number of groups. The best previous performance guarantee was $(1+\frac{\ln k}{2})\sqrt{k}$ (Bateman, Helvig, Robins and Zelikovsky). Noting that the group Steiner problem also models the network design problems with location-theoretic constraints studied by Marathe, Ravi and Sundaram, our results also improve their bicriteria approximation results. Similarly, we improve previous results by Slav{\'\i}k on a tour version, called the errand scheduling problem. We use the result of Bartal on probabilistic approximation of finite metric spaces by tree metrics to reduce the problem to one in a tree metric. To find a solution on a tree, we use a generalization of randomized rounding. Our approximation guarantees improve to $O(\log^2 n \log k)$ in the case of graphs that exclude small minors by using a better alternative to Bartal's result on probabilistic approximations of metrics induced by such graphs (Konjevod, Ravi and Salman) -- this improvement is valid for the group Steiner problem on planar graphs as well as on a set of points in the 2D-Euclidean case.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Garg, Naveen %A Konjevod, Goran %A Ravi, R. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A polylogarithmic approximation algorithm for group Steiner tree problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9CCF-3 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 7 p. %X The group Steiner tree problem is a generalization of the Steiner tree problem where we are given several subsets (groups) of vertices in a weighted graph, and the goal is to find a minimum-weight connected subgraph containing at least one vertex from each group. The problem was introduced by Reich and Widmayer and finds applications in VLSI design. The group Steiner tree problem generalizes the set covering problem, and is therefore at least as hard. We give a randomized $O(\log^3 n \log k)$-approximation algorithm for the group Steiner tree problem on an $n$-node graph, where $k$ is the number of groups. The best previous performance guarantee was $(1+\frac{\ln k}{2})\sqrt{k}$ (Bateman, Helvig, Robins and Zelikovsky). Noting that the group Steiner problem also models the network design problems with location-theoretic constraints studied by Marathe, Ravi and Sundaram, our results also improve their bicriteria approximation results. Similarly, we improve previous results by Slav{\'\i}k on a tour version, called the errand scheduling problem. We use the result of Bartal on probabilistic approximation of finite metric spaces by tree metrics to reduce the problem to one in a tree metric. To find a solution on a tree, we use a generalization of randomized rounding. Our approximation guarantees improve to $O(\log^2 n \log k)$ in the case of graphs that exclude small minors by using a better alternative to Bartal's result on probabilistic approximations of metrics induced by such graphs (Konjevod, Ravi and Salman) -- this improvement is valid for the group Steiner problem on planar graphs as well as on a set of points in the 2D-Euclidean case. %B Research Report / Max-Planck-Institut f&#252;r Informatik
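The tree-rounding step mentioned in the abstract can be illustrated compactly. Below is a hedged Python sketch of one trial of dependent rounding on a tree: an edge is kept with probability equal to the ratio of its LP value to its parent edge's value. The data layout (`parent`, `x`) is an assumption made for illustration, not the report's notation.

```python
import random

def round_on_tree(parent, x, root):
    """One trial of dependent randomized rounding on a tree (sketch).

    parent: maps each non-root vertex to its parent vertex.
    x:      LP value on the edge into each non-root vertex, assumed
            non-increasing away from the root (the root counts as 1).
    Returns the set of vertices connected to the root by chosen edges;
    a group counts as covered if one of its vertices is returned.
    """
    def depth(v):
        d = 0
        while v != root:
            v, d = parent[v], d + 1
        return d

    connected = {root}
    for v in sorted(parent, key=depth):       # decide parents before children
        p = parent[v]
        if p in connected and random.random() < x[v] / x.get(p, 1.0):
            connected.add(v)                  # keep edge with prob x_v / x_p
    return connected
```

Taking the union of many independent trials covers every group with high probability, which is where the polylogarithmic factors in the guarantee arise.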
[143]
N. Garg and C. Manss, “Evaluating a 2-approximation algorithm for edge-separators in planar graphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-010, 1997.
Abstract
In this paper we report on results obtained by an implementation of a 2-approximation algorithm for edge separators in planar graphs. For 374 out of the 435 instances the algorithm returned the optimum solution. For the remaining instances the solution returned was never more than 10.6\% away from the lower bound on the optimum separator. We also improve the worst-case running time of the algorithm from $O(n^6)$ to $O(n^5)$ and present techniques which improve the running time significantly in practice.
Export
BibTeX
@techreport{GargManss97, TITLE = {Evaluating a 2-approximation algorithm for edge-separators in planar graphs}, AUTHOR = {Garg, Naveen and Manss, Christian}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-010}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {In this paper we report on results obtained by an implementation of a 2-approximation algorithm for edge separators in planar graphs. For 374 out of the 435 instances the algorithm returned the optimum solution. For the remaining instances the solution returned was never more than 10.6\% away from the lower bound on the optimum separator. We also improve the worst-case running time of the algorithm from $O(n^6)$ to $O(n^5)$ and present techniques which improve the running time significantly in practice.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Garg, Naveen %A Manss, Christian %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Evaluating a 2-approximation algorithm for edge-separators in planar graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9E1C-7 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 9 p. %X In this paper we report on results obtained by an implementation of a 2-approximation algorithm for edge separators in planar graphs. For 374 out of the 435 instances the algorithm returned the optimum solution. For the remaining instances the solution returned was never more than 10.6\% away from the lower bound on the optimum separator. We also improve the worst-case running time of the algorithm from $O(n^6)$ to $O(n^5)$ and present techniques which improve the running time significantly in practice. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[144]
N. Garg, “Approximating sparsest cuts,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-002, 1997.
Export
BibTeX
@techreport{Garg97, TITLE = {Approximating sparsest cuts}, AUTHOR = {Garg, Naveen}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Garg, Naveen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Approximating sparsest cuts : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9FD3-1 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 9 p. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[145]
N. Garg, S. Albers, and S. Leonardi, “Minimizing stall time in single and parallel disk systems,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-024, 1997.
Abstract
We study integrated prefetching and caching problems following the work of Cao et al. and Kimbrel and Karlin. Cao et al. and Kimbrel and Karlin gave approximation algorithms for minimizing the total elapsed time in single and parallel disk settings. The total elapsed time is the sum of the processor stall times and the length of the request sequence to be served. We show that an optimum prefetching/caching schedule for a single disk problem can be computed in polynomial time, thereby settling an open question by Kimbrel and Karlin. For the parallel disk problem we give an approximation algorithm for minimizing stall time. Stall time is a more realistic and harder to approximate measure for this problem. All of our algorithms are based on a new approach which involves formulating the prefetching/caching problems as integer programs.
Export
BibTeX
@techreport{AlbersGargLeonardi97, TITLE = {Minimizing stall time in single and parallel disk systems}, AUTHOR = {Garg, Naveen and Albers, Susanne and Leonardi, Stefano}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-024}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {We study integrated prefetching and caching problems following the work of Cao et al. and Kimbrel and Karlin. Cao et al. and Kimbrel and Karlin gave approximation algorithms for minimizing the total elapsed time in single and parallel disk settings. The total elapsed time is the sum of the processor stall times and the length of the request sequence to be served. We show that an optimum prefetching/caching schedule for a single disk problem can be computed in polynomial time, thereby settling an open question by Kimbrel and Karlin. For the parallel disk problem we give an approximation algorithm for minimizing stall time. Stall time is a more realistic and harder to approximate measure for this problem. All of our algorithms are based on a new approach which involves formulating the prefetching/caching problems as integer programs.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Garg, Naveen %A Albers, Susanne %A Leonardi, Stefano %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Minimizing stall time in single and parallel disk systems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9D69-1 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 16 p. %X We study integrated prefetching and caching problems following the work of Cao et al. and Kimbrel and Karlin. Cao et al. and Kimbrel and Karlin gave approximation algorithms for minimizing the total elapsed time in single and parallel disk settings. The total elapsed time is the sum of the processor stall times and the length of the request sequence to be served. We show that an optimum prefetching/caching schedule for a single disk problem can be computed in polynomial time, thereby settling an open question by Kimbrel and Karlin. For the parallel disk problem we give an approximation algorithm for minimizing stall time. Stall time is a more realistic and harder to approximate measure for this problem. All of our algorithms are based on a new approach which involves formulating the prefetching/caching problems as integer programs. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[146]
B. Jung, H.-P. Lenhof, P. Müller, and C. Rüb, “Parallel algorithms for MD-simulations of synthetic polymers,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-003, 1997.
Abstract
Molecular dynamics simulation has become an important tool for testing and developing hypotheses about chemical and physical processes. Since the required amount of computing power is tremendous there is a strong interest in parallel algorithms. We deal with efficient algorithms on MIMD computers for a special class of macromolecules, namely synthetic polymers, which play a very important role in industry. This makes it worthwhile to design fast parallel algorithms specifically for them. Contrary to existing parallel algorithms, our algorithms take the structure of synthetic polymers into account which allows faster simulation of their dynamics.
Export
BibTeX
@techreport{JungLenhofMullerRub97, TITLE = {Parallel algorithms for {MD}-simulations of synthetic polymers}, AUTHOR = {Jung, Bernd and Lenhof, Hans-Peter and M{\"u}ller, Peter and R{\"u}b, Christine}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {Molecular dynamics simulation has become an important tool for testing and developing hypotheses about chemical and physical processes. Since the required amount of computing power is tremendous there is a strong interest in parallel algorithms. We deal with efficient algorithms on MIMD computers for a special class of macromolecules, namely synthetic polymers, which play a very important role in industry. This makes it worthwhile to design fast parallel algorithms specifically for them. Contrary to existing parallel algorithms, our algorithms take the structure of synthetic polymers into account which allows faster simulation of their dynamics.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Jung, Bernd %A Lenhof, Hans-Peter %A M&#252;ller, Peter %A R&#252;b, Christine %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Parallel algorithms for MD-simulations of synthetic polymers : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9FD0-7 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 32 p. %X Molecular dynamics simulation has become an important tool for testing and developing hypotheses about chemical and physical processes. Since the required amount of computing power is tremendous there is a strong interest in parallel algorithms. We deal with efficient algorithms on MIMD computers for a special class of macromolecules, namely synthetic polymers, which play a very important role in industry. This makes it worthwhile to design fast parallel algorithms specifically for them. Contrary to existing parallel algorithms, our algorithms take the structure of synthetic polymers into account which allows faster simulation of their dynamics. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[147]
M. Jünger, S. Leipert, and P. Mutzel, “Pitfalls of using PQ-Trees in automatic graph drawing,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-97-1-015, 1997.
Abstract
A number of erroneous attempts involving $PQ$-trees in the context of automatic graph drawing algorithms have been presented in the literature in recent years. In order to prevent future research from constructing algorithms with similar errors we point out some of the major mistakes. In particular, we examine erroneous usage of the $PQ$-tree data structure in algorithms for computing maximal planar subgraphs and an algorithm for testing leveled planarity of leveled directed acyclic graphs with several sources and sinks.
Export
BibTeX
@techreport{JungerLeipertMutzel97, TITLE = {Pitfalls of using {PQ}-Trees in automatic graph drawing}, AUTHOR = {J{\"u}nger, Michael and Leipert, Sebastian and Mutzel, Petra}, LANGUAGE = {eng}, NUMBER = {MPI-I-97-1-015}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {A number of erroneous attempts involving $PQ$-trees in the context of automatic graph drawing algorithms have been presented in the literature in recent years. In order to prevent future research from constructing algorithms with similar errors we point out some of the major mistakes. In particular, we examine erroneous usage of the $PQ$-tree data structure in algorithms for computing maximal planar subgraphs and an algorithm for testing leveled planarity of leveled directed acyclic graphs with several sources and sinks.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A J&#252;nger, Michael %A Leipert, Sebastian %A Mutzel, Petra %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Pitfalls of using PQ-Trees in automatic graph drawing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9E13-A %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 12 p. %X A number of erroneous attempts involving $PQ$-trees in the context of automatic graph drawing algorithms have been presented in the literature in recent years. In order to prevent future research from constructing algorithms with similar errors we point out some of the major mistakes. In particular, we examine erroneous usage of the $PQ$-tree data structure in algorithms for computing maximal planar subgraphs and an algorithm for testing leveled planarity of leveled directed acyclic graphs with several sources and sinks. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[148]
H.-P. Lenhof, “New contact measures for the protein docking problem,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-97-1-004, 1997.
Abstract
We have developed and implemented a parallel distributed algorithm for the rigid-body protein docking problem. The algorithm is based on a new fitness function for evaluating the surface matching of a given conformation. The fitness function is defined as the weighted sum of two contact measures, the {\em geometric contact measure} and the {\em chemical contact measure}. The geometric contact measure measures the ``size'' of the contact area of two molecules. It is a potential function that counts the ``van der Waals contacts'' between the atoms of the two molecules (the algorithm does not compute the Lennard-Jones potential). The chemical contact measure is also based on the ``van der Waals contacts'' principle: We consider all atom pairs that have a ``van der Waals'' contact, but instead of adding a constant for each pair $(a,b)$ we add a ``chemical weight'' that depends on the atom pair $(a,b)$. We tested our docking algorithm with a test set that contains the test examples of Norel et al.~\cite{NLWN94} and \protect{Fischer} et al.~\cite{FLWN95} and compared the results of our docking algorithm with the results of Norel et al.~\cite{NLWN94,NLWN95}, with the results of Fischer et al.~\cite{FLWN95} and with the results of Meyer et al.~\cite{MWS96}. In 32 of 35 test examples the best conformation with respect to the fitness function was an approximation of the real conformation.
Export
BibTeX
@techreport{Lenhof97, TITLE = {New contact measures for the protein docking problem}, AUTHOR = {Lenhof, Hans-Peter}, LANGUAGE = {eng}, NUMBER = {MPI-I-97-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {We have developed and implemented a parallel distributed algorithm for the rigid-body protein docking problem. The algorithm is based on a new fitness function for evaluating the surface matching of a given conformation. The fitness function is defined as the weighted sum of two contact measures, the {\em geometric contact measure} and the {\em chemical contact measure}. The geometric contact measure measures the ``size'' of the contact area of two molecules. It is a potential function that counts the ``van der Waals contacts'' between the atoms of the two molecules (the algorithm does not compute the Lennard-Jones potential). The chemical contact measure is also based on the ``van der Waals contacts'' principle: We consider all atom pairs that have a ``van der Waals'' contact, but instead of adding a constant for each pair $(a,b)$ we add a ``chemical weight'' that depends on the atom pair $(a,b)$. We tested our docking algorithm with a test set that contains the test examples of Norel et al.~\cite{NLWN94} and \protect{Fischer} et al.~\cite{FLWN95} and compared the results of our docking algorithm with the results of Norel et al.~\cite{NLWN94,NLWN95}, with the results of Fischer et al.~\cite{FLWN95} and with the results of Meyer et al.~\cite{MWS96}. In 32 of 35 test examples the best conformation with respect to the fitness function was an approximation of the real conformation.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Lenhof, Hans-Peter %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T New contact measures for the protein docking problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9F7D-3 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 10 p. %X We have developed and implemented a parallel distributed algorithm for the rigid-body protein docking problem. The algorithm is based on a new fitness function for evaluating the surface matching of a given conformation. The fitness function is defined as the weighted sum of two contact measures, the {\em geometric contact measure} and the {\em chemical contact measure}. The geometric contact measure measures the ``size'' of the contact area of two molecules. It is a potential function that counts the ``van der Waals contacts'' between the atoms of the two molecules (the algorithm does not compute the Lennard-Jones potential). The chemical contact measure is also based on the ``van der Waals contacts'' principle: We consider all atom pairs that have a ``van der Waals'' contact, but instead of adding a constant for each pair $(a,b)$ we add a ``chemical weight'' that depends on the atom pair $(a,b)$. We tested our docking algorithm with a test set that contains the test examples of Norel et al.~\cite{NLWN94} and \protect{Fischer} et al.~\cite{FLWN95} and compared the results of our docking algorithm with the results of Norel et al.~\cite{NLWN94,NLWN95}, with the results of Fischer et al.~\cite{FLWN95} and with the results of Meyer et al.~\cite{MWS96}. In 32 of 35 test examples the best conformation with respect to the fitness function was an approximation of the real conformation. %B Research Report
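To make the two contact measures in this abstract concrete, here is a minimal Python sketch of a fitness function built as the abstract describes: a weighted sum of a geometric count of van der Waals contacts and a chemically weighted variant of the same count. The contact test, the tolerance, and all names are illustrative assumptions, not the report's exact definitions.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def docking_fitness(mol_a, mol_b, chem_weight,
                    w_geo=1.0, w_chem=1.0, tol=0.5):
    """Fitness of a rigid-body docking conformation (sketch).

    mol_a, mol_b: lists of (element, position, vdw_radius) triples,
    with positions already placed in the candidate conformation.
    chem_weight:  maps an (element, element) pair to a chemical weight.
    """
    geometric, chemical = 0, 0.0
    for ea, pa, ra in mol_a:
        for eb, pb, rb in mol_b:
            d = dist(pa, pb)
            # a "van der Waals contact": centers close to the sum of the
            # radii, neither clashing nor too far apart
            if ra + rb <= d <= ra + rb + tol:
                geometric += 1                       # geometric measure
                chemical += chem_weight.get((ea, eb), 0.0)
    return w_geo * geometric + w_chem * chemical
```

A docking search would evaluate this fitness over many candidate placements of the second molecule and keep the highest-scoring conformations.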
[149]
S. Leonardi, A. Marchetti-Spaccamela, A. Presciutti, and A. Rosén, “Randomized on-line call control revisited,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-97-1-023, 1997.
Abstract
We consider the on-line problem of call admission and routing on trees and meshes. Previous work considered randomized algorithms and analyzed the {\em competitive ratio} of the algorithms. However, these previous algorithms could obtain very low profit with high probability. We investigate the question whether it is possible to devise on-line competitive algorithms for these problems that would guarantee a ``good'' solution with ``good'' probability. We give a new family of randomized algorithms with provably optimal (up to constant factors) competitive ratios, and provably good probability to get a profit close to the expectation. We also give lower bounds that show how high the probability of such algorithms to get a profit close to the expectation can be. We also see this work as a first step towards understanding how well the profit of a competitively-optimal randomized on-line algorithm can be concentrated around its expectation.
Export
BibTeX
@techreport{LeonardiMarchetti-SpaccamelaPresciuttiRosten, TITLE = {Randomized on-line call control revisited}, AUTHOR = {Leonardi, Stefano and Marchetti-Spaccamela, Alessio and Presciutti, Alessio and Ros{\'e}n, Adi}, LANGUAGE = {eng}, NUMBER = {MPI-I-97-1-023}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {We consider the on-line problem of call admission and routing on trees and meshes. Previous work considered randomized algorithms and analyzed the {\em competitive ratio} of the algorithms. However, these previous algorithms could obtain very low profit with high probability. We investigate the question whether it is possible to devise on-line competitive algorithms for these problems that would guarantee a ``good'' solution with ``good'' probability. We give a new family of randomized algorithms with provably optimal (up to constant factors) competitive ratios, and provably good probability to get a profit close to the expectation. We also give lower bounds that show how high the probability of such algorithms to get a profit close to the expectation can be. We also see this work as a first step towards understanding how well the profit of a competitively-optimal randomized on-line algorithm can be concentrated around its expectation.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Leonardi, Stefano %A Marchetti-Spaccamela, Alessio %A Presciutti, Alessio %A Ros&#233;n, Adi %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Randomized on-line call control revisited : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9D6E-8 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 19 p. %X We consider the on-line problem of call admission and routing on trees and meshes. Previous work considered randomized algorithms and analyzed the {\em competitive ratio} of the algorithms. However, these previous algorithms could obtain very low profit with high probability. We investigate the question whether it is possible to devise on-line competitive algorithms for these problems that would guarantee a ``good'' solution with ``good'' probability. We give a new family of randomized algorithms with provably optimal (up to constant factors) competitive ratios, and provably good probability to get a profit close to the expectation. We also give lower bounds that show how high the probability of such algorithms to get a profit close to the expectation can be. We also see this work as a first step towards understanding how well the profit of a competitively-optimal randomized on-line algorithm can be concentrated around its expectation. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[150]
M. Lermen and K. Reinert, “The practical use of the A* algorithm for exact multiple sequence alignment,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-97-1-028, 1997.
Abstract
Multiple alignment is an important problem in computational biology. It is well known that it can be solved exactly by a dynamic programming algorithm which in turn can be interpreted as a shortest path computation in a directed acyclic graph. The $\cal{A}^*$ algorithm (or goal directed unidirectional search) is a technique that speeds up the computation of a shortest path by transforming the edge lengths without losing the optimality of the shortest path. We implemented the $\cal{A}^*$ algorithm in a computer program similar to MSA~\cite{GupKecSch95} and FMA~\cite{ShiIma97}. We incorporated in this program new bounding strategies for both lower and upper bounds and show that the $\cal{A}^*$ algorithm, together with our improvements, can speed up computations considerably. Additionally we show that the $\cal{A}^*$ algorithm together with a standard bounding technique is superior to the well-known Carrillo-Lipman bounding since it excludes more nodes from consideration.
Export
BibTeX
@techreport{LermenReinert97, TITLE = {The practical use of the A* algorithm for exact multiple sequence alignment}, AUTHOR = {Lermen, Martin and Reinert, Knut}, LANGUAGE = {eng}, NUMBER = {MPI-I-97-1-028}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {Multiple alignment is an important problem in computational biology. It is well known that it can be solved exactly by a dynamic programming algorithm which in turn can be interpreted as a shortest path computation in a directed acyclic graph. The $\cal{A}^*$ algorithm (or goal directed unidirectional search) is a technique that speeds up the computation of a shortest path by transforming the edge lengths without losing the optimality of the shortest path. We implemented the $\cal{A}^*$ algorithm in a computer program similar to MSA~\cite{GupKecSch95} and FMA~\cite{ShiIma97}. We incorporated in this program new bounding strategies for both lower and upper bounds and show that the $\cal{A}^*$ algorithm, together with our improvements, can speed up computations considerably. Additionally we show that the $\cal{A}^*$ algorithm together with a standard bounding technique is superior to the well-known Carrillo-Lipman bounding since it excludes more nodes from consideration.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Lermen, Martin %A Reinert, Knut %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T The practical use of the A* algorithm for exact multiple sequence alignment : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9CD5-4 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %X Multiple alignment is an important problem in computational biology. It is well known that it can be solved exactly by a dynamic programming algorithm which in turn can be interpreted as a shortest path computation in a directed acyclic graph. The $\cal{A}^*$ algorithm (or goal directed unidirectional search) is a technique that speeds up the computation of a shortest path by transforming the edge lengths without losing the optimality of the shortest path. We implemented the $\cal{A}^*$ algorithm in a computer program similar to MSA~\cite{GupKecSch95} and FMA~\cite{ShiIma97}. We incorporated in this program new bounding strategies for both lower and upper bounds and show that the $\cal{A}^*$ algorithm, together with our improvements, can speed up computations considerably. Additionally we show that the $\cal{A}^*$ algorithm together with a standard bounding technique is superior to the well-known Carrillo-Lipman bounding since it excludes more nodes from consideration. %B Research Report / Max-Planck-Institut f&#252;r Informatik
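Since the report rests on A* over the alignment graph, a generic A* sketch may help fix ideas. In the Python sketch below (not the report's program), the nodes for multiple alignment would be index vectors into the sequences and the heuristic `h` an admissible lower bound such as a sum of pairwise alignment costs; the report's bounding strategies are considerably more refined.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Generic A* shortest-path search (sketch).

    neighbors(u): yields (v, edge_cost) pairs for the successors of u.
    h(u):         admissible lower bound on the remaining cost to `goal`.
    Returns (cost, path); (inf, []) if the goal is unreachable.
    """
    g = {start: 0.0}                      # best known cost from the start
    parent = {}
    heap = [(h(start), start)]            # ordered by f = g + h
    closed = set()
    while heap:
        f, u = heapq.heappop(heap)
        if u == goal:                     # admissibility makes this optimal
            path = [u]
            while u in parent:
                u = parent[u]
                path.append(u)
            return g[goal], path[::-1]
        if u in closed:
            continue
        closed.add(u)
        for v, c in neighbors(u):
            if g[u] + c < g.get(v, float("inf")):
                g[v] = g[u] + c
                parent[v] = u
                heapq.heappush(heap, (g[v] + h(v), v))
    return float("inf"), []
```

The tighter the lower bound `h`, the fewer nodes are expanded, which is exactly the effect the abstract reports over the Carrillo-Lipman bound.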
[151]
P. Mutzel, “An alternative method to crossing minimization on hierarchical graphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-008, 1997.
Abstract
A common method for drawing directed graphs is, as a first step, to partition the vertices into a set of $k$ levels and then, as a second step, to permute the vertices within the levels such that the number of crossings is minimized. We suggest an alternative method for the second step, namely, removing the minimal number of edges such that the resulting graph is $k$-level planar. For the final diagram the removed edges are reinserted into a $k$-level planar drawing. Hence, instead of considering the $k$-level crossing minimization problem, we suggest solving the $k$-level planarization problem. In this paper we address the case $k=2$. First, we give a motivation for our approach. Then, we address the problem of extracting a 2-level planar subgraph of maximum weight in a given 2-level graph. This problem is NP-hard. Based on a characterization of 2-level planar graphs, we give an integer linear programming formulation for the 2-level planarization problem. Moreover, we define and investigate the polytope $\2LPS(G)$ associated with the set of all 2-level planar subgraphs of a given 2-level graph $G$. We will see that this polytope has full dimension and that the inequalities occurring in the integer linear description are facet-defining for $\2LPS(G)$. The inequalities in the integer linear programming formulation can be separated in polynomial time, hence they can be used efficiently in a branch-and-cut method for solving practical instances of the 2-level planarization problem. Furthermore, we derive new inequalities that substantially improve the quality of the obtained solution. We report on extensive computational results.
Export
BibTeX
@techreport{Mutzel97, TITLE = {An alternative method to crossing minimization on hierarchical graphs}, AUTHOR = {Mutzel, Petra}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-008}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {A common method for drawing directed graphs is, as a first step, to partition the vertices into a set of $k$ levels and then, as a second step, to permute the vertices within the levels such that the number of crossings is minimized. We suggest an alternative method for the second step, namely, removing the minimal number of edges such that the resulting graph is $k$-level planar. For the final diagram the removed edges are reinserted into a $k$-level planar drawing. Hence, instead of considering the $k$-level crossing minimization problem, we suggest solving the $k$-level planarization problem. In this paper we address the case $k=2$. First, we give a motivation for our approach. Then, we address the problem of extracting a 2-level planar subgraph of maximum weight in a given 2-level graph. This problem is NP-hard. Based on a characterization of 2-level planar graphs, we give an integer linear programming formulation for the 2-level planarization problem. Moreover, we define and investigate the polytope $\2LPS(G)$ associated with the set of all 2-level planar subgraphs of a given 2-level graph $G$. We will see that this polytope has full dimension and that the inequalities occurring in the integer linear description are facet-defining for $\2LPS(G)$. The inequalities in the integer linear programming formulation can be separated in polynomial time, hence they can be used efficiently in a branch-and-cut method for solving practical instances of the 2-level planarization problem. Furthermore, we derive new inequalities that substantially improve the quality of the obtained solution. We report on extensive computational results.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Mutzel, Petra %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T An alternative method to crossing minimization on hierarchical graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9E22-8 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 15 p. %X A common method for drawing directed graphs is, as a first step, to partition the vertices into a set of $k$ levels and then, as a second step, to permute the vertices within the levels such that the number of crossings is minimized. We suggest an alternative method for the second step, namely, removing the minimal number of edges such that the resulting graph is $k$-level planar. For the final diagram the removed edges are reinserted into a $k$-level planar drawing. Hence, instead of considering the $k$-level crossing minimization problem, we suggest solving the $k$-level planarization problem. In this paper we address the case $k=2$. First, we give a motivation for our approach. Then, we address the problem of extracting a 2-level planar subgraph of maximum weight in a given 2-level graph. This problem is NP-hard. Based on a characterization of 2-level planar graphs, we give an integer linear programming formulation for the 2-level planarization problem. Moreover, we define and investigate the polytope $\2LPS(G)$ associated with the set of all 2-level planar subgraphs of a given 2-level graph $G$. We will see that this polytope has full dimension and that the inequalities occurring in the integer linear description are facet-defining for $\2LPS(G)$. The inequalities in the integer linear programming formulation can be separated in polynomial time, hence they can be used efficiently in a branch-and-cut method for solving practical instances of the 2-level planarization problem. Furthermore, we derive new inequalities that substantially improve the quality of the obtained solution. We report on extensive computational results. %B Research Report / Max-Planck-Institut f&#252;r Informatik
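For contrast with the planarization approach advocated in this abstract, the quantity minimized by the classical second step is simple to compute: two edges of a 2-level drawing cross exactly when their endpoints appear in opposite orders on the two levels. A minimal Python sketch of that objective:

```python
def two_layer_crossings(edges):
    """Count edge crossings in a 2-level drawing (sketch).

    edges: list of (top_position, bottom_position) pairs, where the
    positions give the left-to-right order of the endpoints on each
    level. Edges sharing an endpoint are not counted as crossing.
    """
    crossings = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (a, b), (c, d) = edges[i], edges[j]
            if (a - c) * (b - d) < 0:   # opposite orders on the two levels
                crossings += 1
    return crossings
```

For example, `two_layer_crossings([(0, 1), (1, 0)])` returns 1; the planarization approach would instead delete one of the two edges and reinsert it into the planar drawing afterwards.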
[152]
C. Rüb, “On Batcher’s Merge Sorts as Parallel Sorting Algorithms,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-012, 1997.
Abstract
In this paper we examine the average running times of Batcher's bitonic merge and Batcher's odd-even merge when they are used as parallel merging algorithms. It has been shown previously that the running time of odd-even merge can be upper bounded by a function of the maximal rank difference for elements in the two input sequences. Here we give an almost matching lower bound for odd-even merge as well as a similar upper bound for (a special version of) bitonic merge. From this it follows that the average running time of odd-even merge (bitonic merge) is $\Theta((n/p)(1+\log(1+p^2/n)))$ ($O((n/p)(1+\log(1+p^2/n)))$, resp.) where $n$ is the size of the input and $p$ is the number of processors used. Using these results we then show that the average running times of odd-even merge sort and bitonic merge sort are $O((n/p)(\log n + (\log(1+p^2/n))^2))$, that is, the two algorithms are optimal on the average if $n\geq p^2/2^{\sqrt{\log p}}$. The derived bounds do not suffice to compare the two sorting algorithms; we therefore also compare them experimentally, for various sizes of input and numbers of processors.
Export
BibTeX
@techreport{Rub97, TITLE = {On Batcher's Merge Sorts as Parallel Sorting Algorithms}, AUTHOR = {R{\"u}b, Christine}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-012}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {In this paper we examine the average running times of Batcher's bitonic merge and Batcher's odd-even merge when they are used as parallel merging algorithms. It has been shown previously that the running time of odd-even merge can be upper bounded by a function of the maximal rank difference for elements in the two input sequences. Here we give an almost matching lower bound for odd-even merge as well as a similar upper bound for (a special version of) bitonic merge. From this it follows that the average running time of odd-even merge (bitonic merge) is $\Theta((n/p)(1+\log(1+p^2/n)))$ ($O((n/p)(1+\log(1+p^2/n)))$, resp.) where $n$ is the size of the input and $p$ is the number of processors used. Using these results we then show that the average running times of odd-even merge sort and bitonic merge sort are $O((n/p)(\log n + (\log(1+p^2/n))^2))$, that is, the two algorithms are optimal on the average if $n\geq p^2/2^{\sqrt{\log p}}$. The derived bounds do not suffice to compare the two sorting algorithms; we therefore also compare them experimentally, for various sizes of input and numbers of processors.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A R&#252;b, Christine %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On Batcher's Merge Sorts as Parallel Sorting Algorithms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9E16-4 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 23 p. %X In this paper we examine the average running times of Batcher's bitonic merge and Batcher's odd-even merge when they are used as parallel merging algorithms. It has been shown previously that the running time of odd-even merge can be upper bounded by a function of the maximal rank difference for elements in the two input sequences. Here we give an almost matching lower bound for odd-even merge as well as a similar upper bound for (a special version of) bitonic merge. From this it follows that the average running time of odd-even merge (bitonic merge) is $\Theta((n/p)(1+\log(1+p^2/n)))$ ($O((n/p)(1+\log(1+p^2/n)))$, resp.) where $n$ is the size of the input and $p$ is the number of processors used. Using these results we then show that the average running times of odd-even merge sort and bitonic merge sort are $O((n/p)(\log n + (\log(1+p^2/n))^2))$, that is, the two algorithms are optimal on the average if $n\geq p^2/2^{\sqrt{\log p}}$. The derived bounds do not suffice to compare the two sorting algorithms; we therefore also compare them experimentally, for various sizes of input and numbers of processors. %B Research Report / Max-Planck-Institut f&#252;r Informatik
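For reference, the comparator network analyzed in this report is short enough to state in full. The following sequential Python sketch of Batcher's odd-even merge (for two sorted runs whose common length is a power of two) performs exactly the recursive even/odd split and the final compare-exchange stage.

```python
def odd_even_merge(a, b):
    """Batcher's odd-even merge as a sequential sketch.

    a, b: sorted lists of equal length, that length a power of two.
    Returns the merged sorted list.
    """
    n = len(a)
    if n == 1:
        return [min(a[0], b[0]), max(a[0], b[0])]
    evens = odd_even_merge(a[0::2], b[0::2])   # merge even-indexed items
    odds = odd_even_merge(a[1::2], b[1::2])    # merge odd-indexed items
    # interleave the two results ...
    merged = [None] * (2 * n)
    merged[0::2], merged[1::2] = evens, odds
    # ... then one compare-exchange between adjacent pairs fixes the order
    for i in range(1, 2 * n - 1, 2):
        if merged[i] > merged[i + 1]:
            merged[i], merged[i + 1] = merged[i + 1], merged[i]
    return merged
```

For example, `odd_even_merge([1, 4, 5, 8], [2, 3, 6, 7])` yields `[1, 2, 3, 4, 5, 6, 7, 8]`; the final loop is the stage whose data-dependent work drives the average-case bounds discussed in the abstract.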
[153]
S. Schirra, “Designing a Computational Geometry Algorithms Library,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-014, 1997.
Abstract
In these notes, which were originally written as lecture notes for the Advanced School on Algorithmic Foundations of Geographic Information Systems, CISM, held in Udine, Italy, in September 1996, we discuss issues related to the design of a computational geometry algorithms library. We discuss modularity and generality, efficiency and robustness, and ease of use. We argue that exact geometric computation is the most promising approach to ensure robustness in a geometric algorithms library. Many of the presented concepts have been developed jointly in the kernel design group of CGAL and/or in the geometry group of LEDA. However, the view held in these notes is a personal view, not the official view of CGAL.
Export
BibTeX
@techreport{Schirra97, TITLE = {Designing a Computational Geometry Algorithms Library}, AUTHOR = {Schirra, Stefan}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-014}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {In these notes, which were originally written as lecture notes for the Advanced School on Algorithmic Foundations of Geographic Information Systems, CISM, held in Udine, Italy, in September 1996, we discuss issues related to the design of a computational geometry algorithms library. We discuss modularity and generality, efficiency and robustness, and ease of use. We argue that exact geometric computation is the most promising approach to ensure robustness in a geometric algorithms library. Many of the presented concepts have been developed jointly in the kernel design group of CGAL and/or in the geometry group of LEDA. However, the view held in these notes is a personal view, not the official view of CGAL.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Schirra, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Designing a Computational Geometry Algorithms Library : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9D89-8 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 8 p. %X In these notes, which were originally written as lecture notes for the Advanced School on Algorithmic Foundations of Geographic Information Systems, CISM, held in Udine, Italy, in September 1996, we discuss issues related to the design of a computational geometry algorithms library. We discuss modularity and generality, efficiency and robustness, and ease of use. We argue that exact geometric computation is the most promising approach to ensure robustness in a geometric algorithms library. Many of the presented concepts have been developed jointly in the kernel design group of CGAL and/or in the geometry group of LEDA. However, the view held in these notes is a personal view, not the official view of CGAL. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[154]
J. Sibeyn, “From parallel to external list ranking,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-021, 1997.
Abstract
Novel algorithms are presented for parallel and external memory list-ranking. The same algorithms can be used for computing basic tree functions, such as the depth of a node. The parallel algorithm stands out through its low memory use, its simplicity and its performance. For a large range of problem sizes, it is almost as fast as the fastest previous algorithms. On a Paragon with 100 PUs, each holding 10^6 nodes, we obtain speed-up 25. For external-memory list-ranking, the best algorithm so far is an optimized version of independent-set-removal. Actually, this algorithm is not good at all: for a list of length N, the paging volume is about 72 N. Our new algorithm reduces this to 18 N. The algorithm has been implemented, and the theoretical results are confirmed.
Export
BibTeX
@techreport{Sibeyn97, TITLE = {From parallel to external list ranking}, AUTHOR = {Sibeyn, Jop}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-021}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {Novel algorithms are presented for parallel and external memory list-ranking. The same algorithms can be used for computing basic tree functions, such as the depth of a node. The parallel algorithm stands out through its low memory use, its simplicity and its performance. For a large range of problem sizes, it is almost as fast as the fastest previous algorithms. On a Paragon with 100 PUs, each holding 10^6 nodes, we obtain speed-up 25. For external-memory list-ranking, the best algorithm so far is an optimized version of independent-set-removal. Actually, this algorithm is not good at all: for a list of length N, the paging volume is about 72 N. Our new algorithm reduces this to 18 N. The algorithm has been implemented, and the theoretical results are confirmed.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Sibeyn, Jop %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T From parallel to external list ranking : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9D76-1 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 15 p. %X Novel algorithms are presented for parallel and external memory list-ranking. The same algorithms can be used for computing basic tree functions, such as the depth of a node. The parallel algorithm stands out through its low memory use, its simplicity and its performance. For a large range of problem sizes, it is almost as fast as the fastest previous algorithms. On a Paragon with 100 PUs, each holding 10^6 nodes, we obtain speed-up 25. For external-memory list-ranking, the best algorithm so far is an optimized version of independent-set-removal. Actually, this algorithm is not good at all: for a list of length N, the paging volume is about 72 N. Our new algorithm reduces this to 18 N. The algorithm has been implemented, and the theoretical results are confirmed. %B Research Report / Max-Planck-Institut f&#252;r Informatik
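The ranks discussed in this abstract are computed by pointer jumping in the classical parallel setting; a sequential simulation of the jumping rounds is a compact way to see what the algorithms compute. A minimal Python sketch (an illustration of the problem, not the report's low-memory or external-memory algorithm):

```python
def list_rank(succ):
    """Pointer-jumping list ranking (sketch).

    succ[i] is the successor of node i; the last node points to itself.
    Returns, for every node, its distance to the end of the list.
    """
    n = len(succ)
    nxt = list(succ)
    rank = [0 if nxt[i] == i else 1 for i in range(n)]
    # each round doubles the distance spanned by every pointer,
    # so O(log n) synchronous rounds suffice
    for _ in range(max(1, n.bit_length())):
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank
```

For instance, `list_rank([1, 2, 3, 3])` returns `[3, 2, 1, 0]`. The same doubling pattern computes basic tree functions such as the depth of a node, as the abstract notes.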
[155]
J. Sibeyn and M. Kaufmann, “BSP-like external-memory computation,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-001, 1997.
Abstract
In this paper we present a paradigm for solving external-memory problems, and illustrate it by algorithms for matrix multiplication, sorting, list ranking, transitive closure and FFT. Our paradigm is based on the use of BSP algorithms. The correspondence is almost perfect, and especially the notion of x-optimality carries over to algorithms designed according to our paradigm. The advantages of the approach are similar to the advantages of BSP algorithms for parallel computing: scalability, portability, predictability. The performance measure here is the total work, not only the number of I/O operations as in previous approaches. The predicted performances are therefore more useful for practical applications.
Export
BibTeX
@techreport{SibeynKaufmann97, TITLE = {{BSP}-like external-memory computation}, AUTHOR = {Sibeyn, Jop and Kaufmann, Michael}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {In this paper we present a paradigm for solving external-memory problems, and illustrate it by algorithms for matrix multiplication, sorting, list ranking, transitive closure and FFT. Our paradigm is based on the use of BSP algorithms. The correspondence is almost perfect, and especially the notion of x-optimality carries over to algorithms designed according to our paradigm. The advantages of the approach are similar to the advantages of BSP algorithms for parallel computing: scalability, portability, predictability. The performance measure here is the total work, not only the number of I/O operations as in previous approaches. The predicted performances are therefore more useful for practical applications.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Sibeyn, Jop %A Kaufmann, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T BSP-like external-memory computation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9FD6-C %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 14 p. %X In this paper we present a paradigm for solving external-memory problems, and illustrate it by algorithms for matrix multiplication, sorting, list ranking, transitive closure and FFT. Our paradigm is based on the use of BSP algorithms. The correspondence is almost perfect, and especially the notion of x-optimality carries over to algorithms designed according to our paradigm. The advantages of the approach are similar to the advantages of BSP algorithms for parallel computing: scalability, portability, predictability. The performance measure here is the total work, not only the number of I/O operations as in previous approaches. The predicted performances are therefore more useful for practical applications. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[156]
M. Thorup, “Faster deterministic sorting and priority queues in linear space,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-016, 1997.
Abstract
The RAM complexity of deterministic linear space sorting of integers in words is improved from $O(n\sqrt{\log n})$ to $O(n(\log\log n)^2)$. No better bounds are known for polynomial space. In fact, the techniques give a deterministic linear space priority queue supporting insert and delete in $O((\log\log n)^2)$ amortized time and find-min in constant time. The priority queue can be implemented using addition, shift, and bit-wise boolean operations.
Export
BibTeX
@techreport{Mikkel97, TITLE = {Faster deterministic sorting and priority queues in linear space}, AUTHOR = {Thorup, Mikkel}, LANGUAGE = {eng}, NUMBER = {MPI-I-1997-1-016}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {The RAM complexity of deterministic linear space sorting of integers in words is improved from $O(n\sqrt{\log n})$ to $O(n(\log\log n)^2)$. No better bounds are known for polynomial space. In fact, the techniques give a deterministic linear space priority queue supporting insert and delete in $O((\log\log n)^2)$ amortized time and find-min in constant time. The priority queue can be implemented using addition, shift, and bit-wise boolean operations.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Thorup, Mikkel %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Faster deterministic sorting and priority queues in linear space : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9D86-E %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 9 p. %X The RAM complexity of deterministic linear space sorting of integers in words is improved from $O(n\sqrt{\log n})$ to $O(n(\log\log n)^2)$. No better bounds are known for polynomial space. In fact, the techniques give a deterministic linear space priority queue supporting insert and delete in $O((\log\log n)^2)$ amortized time and find-min in constant time. The priority queue can be implemented using addition, shift, and bit-wise boolean operations. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[157]
Y. Wang, “Bicriteria job sequencing with release dates,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1997-1-005, 1997.
Abstract
We consider the single machine job sequencing problem with release dates. The main purpose of this paper is to investigate efficient and effective approximation algorithms with a bicriteria performance guarantee. That is, for some $(\rho_1, \rho_2)$, they find schedules simultaneously within a factor of $\rho_1$ of the minimum total weighted completion times and within a factor of $\rho_2$ of the minimum makespan. The main results of the paper are summarized as follows. First, we present a new $O(n\log n)$ algorithm with the performance guarantee $\left(1+\frac{1}{\beta}, 1+\beta\right)$ for any $\beta \in [0,1]$. For the problem with integer processing times and release dates, the algorithm has the bicriteria performance guarantee $\left(2-\frac{1}{p_{max}}, 2-\frac{1}{p_{max}}\right)$, where $p_{max}$ is the maximum processing time. Next, we study an elegant approximation algorithm introduced recently by Goemans. We show that its randomized version has expected bicriteria performance guarantee $(1.7735, 1.51)$ and the derandomized version has the guarantee $(1.7735, 2-\frac{1}{p_{max}})$. To establish the performance guarantee, we also use two LP relaxations and some randomization techniques as Goemans does, but take a different approach in the analysis, based on a decomposition theorem. Finally, we present a family of bad instances showing that it is impossible to achieve $\rho_1\leq 1.5$ with this LP lower bound.
Export
BibTeX
@techreport{Wang1997, TITLE = {Bicriteria job sequencing with release dates}, AUTHOR = {Wang, Yaoguang}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1997-1-005}, NUMBER = {MPI-I-1997-1-005}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1997}, DATE = {1997}, ABSTRACT = {We consider the single machine job sequencing problem with release dates. The main purpose of this paper is to investigate efficient and effective approximation algorithms with a bicriteria performance guarantee. That is, for some $(\rho_1, \rho_2)$, they find schedules simultaneously within a factor of $\rho_1$ of the minimum total weighted completion times and within a factor of $\rho_2$ of the minimum makespan. The main results of the paper are summarized as follows. First, we present a new $O(n\log n)$ algorithm with the performance guarantee $\left(1+\frac{1}{\beta}, 1+\beta\right)$ for any $\beta \in [0,1]$. For the problem with integer processing times and release dates, the algorithm has the bicriteria performance guarantee $\left(2-\frac{1}{p_{max}}, 2-\frac{1}{p_{max}}\right)$, where $p_{max}$ is the maximum processing time. Next, we study an elegant approximation algorithm introduced recently by Goemans. We show that its randomized version has expected bicriteria performance guarantee $(1.7735, 1.51)$ and the derandomized version has the guarantee $(1.7735, 2-\frac{1}{p_{max}})$. To establish the performance guarantee, we also use two LP relaxations and some randomization techniques as Goemans does, but take a different approach in the analysis, based on a decomposition theorem. Finally, we present a family of bad instances showing that it is impossible to achieve $\rho_1\leq 1.5$ with this LP lower bound.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Wang, Yaoguang %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Bicriteria job sequencing with release dates : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-9F79-B %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1997-1-005 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1997 %P 18 p. %X We consider the single machine job sequencing problem with release dates. The main purpose of this paper is to investigate efficient and effective approximation algorithms with a bicriteria performance guarantee. That is, for some $(\rho_1, \rho_2)$, they find schedules simultaneously within a factor of $\rho_1$ of the minimum total weighted completion times and within a factor of $\rho_2$ of the minimum makespan. The main results of the paper are summarized as follows. First, we present a new $O(n\log n)$ algorithm with the performance guarantee $\left(1+\frac{1}{\beta}, 1+\beta\right)$ for any $\beta \in [0,1]$. For the problem with integer processing times and release dates, the algorithm has the bicriteria performance guarantee $\left(2-\frac{1}{p_{max}}, 2-\frac{1}{p_{max}}\right)$, where $p_{max}$ is the maximum processing time. Next, we study an elegant approximation algorithm introduced recently by Goemans. We show that its randomized version has expected bicriteria performance guarantee $(1.7735, 1.51)$ and the derandomized version has the guarantee $(1.7735, 2-\frac{1}{p_{max}})$. To establish the performance guarantee, we also use two LP relaxations and some randomization techniques as Goemans does, but take a different approach in the analysis, based on a decomposition theorem. Finally, we present a family of bad instances showing that it is impossible to achieve $\rho_1\leq 1.5$ with this LP lower bound. %B Research Report / Max-Planck-Institut f&#252;r Informatik
1996
[158]
S. Albers and J. Westbrook, “A survey of self-organizing data structures,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-026, 1996.
Abstract
This paper surveys results in the design and analysis of self-organizing data structures for the search problem. We concentrate on two simple but very popular data structures: the unsorted linear list and the binary search tree. A self-organizing data structure has a rule or algorithm for changing pointers or state data. The self-organizing rule is designed to get the structure into a good state so that future operations can be processed efficiently. Self-organizing data structures differ from constraint structures in that no structural invariant, such as a balance constraint in a binary search tree, has to be satisfied. In the area of self-organizing linear lists we present a series of deterministic and randomized on-line algorithms. We concentrate on competitive algorithms, i.e., algorithms that have a guaranteed performance with respect to an optimal offline algorithm. In the area of binary search trees we present both on-line and off-line algorithms. We also discuss a famous self-organizing on-line rule called splaying and present important theorems and open conjectures on splay trees. In the third part of the paper we show that algorithms for self-organizing lists and trees can be used to build very effective data compression schemes. We report on theoretical and experimental results.
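The compression connection mentioned in the last part is concrete enough to sketch. Below is a minimal move-to-front transform, one of the classic self-organizing list rules such surveys cover; the function name and example input are hypothetical:

```python
# Minimal sketch of the move-to-front (MTF) rule on an unsorted linear list,
# used as a compression transform: recently accessed symbols sit near the
# front, so runs of repeated symbols encode as small indices.
def mtf_encode(data, alphabet):
    table = list(alphabet)             # the self-organizing list
    out = []
    for symbol in data:
        i = table.index(symbol)        # access cost = position in the list
        out.append(i)
        table.insert(0, table.pop(i))  # the self-organizing step
    return out

print(mtf_encode("aaabbbccc", "abc"))  # -> [0, 0, 0, 1, 0, 0, 2, 0, 0]
```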
Export
BibTeX
@techreport{AlbersWestbrook96, TITLE = {A survey of self-organizing data structures}, AUTHOR = {Albers, Susanne and Westbrook, Jeffery}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-026}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {This paper surveys results in the design and analysis of self-organizing data structures for the search problem. We concentrate on two simple but very popular data structures: the unsorted linear list and the binary search tree. A self-organizing data structure has a rule or algorithm for changing pointers or state data. The self-organizing rule is designed to get the structure into a good state so that future operations can be processed efficiently. Self-organizing data structures differ from constraint structures in that no structural invariant, such as a balance constraint in a binary search tree, has to be satisfied. In the area of self-organizing linear lists we present a series of deterministic and randomized on-line algorithms. We concentrate on competitive algorithms, i.e., algorithms that have a guaranteed performance with respect to an optimal offline algorithm. In the area of binary search trees we present both on-line and off-line algorithms. We also discuss a famous self-organizing on-line rule called splaying and present important theorems and open conjectures on splay trees. In the third part of the paper we show that algorithms for self-organizing lists and trees can be used to build very effective data compression schemes. We report on theoretical and experimental results.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Albers, Susanne %A Westbrook, Jeffery %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T A survey of self-organizing data structures : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A03D-0 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 39 p. %X This paper surveys results in the design and analysis of self-organizing data structures for the search problem. We concentrate on two simple but very popular data structures: the unsorted linear list and the binary search tree. A self-organizing data structure has a rule or algorithm for changing pointers or state data. The self-organizing rule is designed to get the structure into a good state so that future operations can be processed efficiently. Self-organizing data structures differ from constraint structures in that no structural invariant, such as a balance constraint in a binary search tree, has to be satisfied. In the area of self-organizing linear lists we present a series of deterministic and randomized on-line algorithms. We concentrate on competitive algorithms, i.e., algorithms that have a guaranteed performance with respect to an optimal offline algorithm. In the area of binary search trees we present both on-line and off-line algorithms. We also discuss a famous self-organizing on-line rule called splaying and present important theorems and open conjectures on splay trees. In the third part of the paper we show that algorithms for self-organizing lists and trees can be used to build very effective data compression schemes. We report on theoretical and experimental results. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[159]
S. Arikati, S. Chaudhuri, and C. Zaroliagis, “All-pairs min-cut in sparse networks,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-007, 1996.
Abstract
Algorithms are presented for the all-pairs min-cut problem in bounded tree-width, planar and sparse networks. The approach used is to preprocess the input $n$-vertex network so that, afterwards, the value of a min-cut between any two vertices can be efficiently computed. A tradeoff is shown between the preprocessing time and the time taken to compute min-cuts subsequently. In particular, after an $O(n\log n)$ preprocessing of a bounded tree-width network, it is possible to find the value of a min-cut between any two vertices in constant time. This implies that for such networks the all-pairs min-cut problem can be solved in time $O(n^2)$. This algorithm is used in conjunction with a graph decomposition technique of Frederickson to obtain algorithms for sparse and planar networks. The running times depend upon a topological property, $\gamma$, of the input network. The parameter $\gamma$ varies between 1 and $\Theta(n)$; the algorithms perform well when $\gamma = o(n)$. The value of a min-cut can be found in time $O(n + \gamma^2 \log \gamma)$ and all-pairs min-cut can be solved in time $O(n^2 + \gamma^4 \log \gamma)$ for sparse networks. The corresponding running times for planar networks are $O(n+\gamma \log \gamma)$ and $O(n^2 + \gamma^3 \log \gamma)$, respectively. The latter bounds depend on a result of independent interest: outerplanar networks have small ``mimicking'' networks which are also outerplanar.
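For intuition, the treewidth-one special case can be coded directly: in a tree network the min-cut value between $u$ and $v$ is simply the minimum capacity on the unique $u$-$v$ path. The sketch below (hypothetical code, not the paper's data structure, and with an $O(n)$ worst-case query rather than the paper's constant-time one) shows the preprocess-then-query pattern:

```python
# Hedged illustration for tree networks only: preprocess parents in O(n), then
# answer a min-cut query by walking the deeper endpoint upward and tracking the
# lightest edge capacity seen on the u-v path.
def preprocess(tree, root):
    """tree: {node: [(neighbor, capacity), ...]} (undirected)."""
    parent, depth = {root: (None, float("inf"))}, {root: 0}
    stack = [root]
    while stack:
        u = stack.pop()
        for v, cap in tree[u]:
            if v not in parent:
                parent[v], depth[v] = (u, cap), depth[u] + 1
                stack.append(v)
    return parent, depth

def min_cut(parent, depth, u, v):
    best = float("inf")
    while u != v:                      # always lift the deeper endpoint
        if depth[u] < depth[v]:
            u, v = v, u
        p, cap = parent[u]
        best, u = min(best, cap), p
    return best

tree = {1: [(2, 3), (3, 5)], 2: [(1, 3), (4, 2)], 3: [(1, 5)], 4: [(2, 2)]}
parent, depth = preprocess(tree, 1)
print(min_cut(parent, depth, 4, 3))    # lightest edge on path 4-2-1-3 is 2
```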
Export
BibTeX
@techreport{ArikatiChaudhuriZaroliagis96, TITLE = {All-pairs min-cut in sparse networks}, AUTHOR = {Arikati, Srinivasa and Chaudhuri, Shiva and Zaroliagis, Christos}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-007}, NUMBER = {MPI-I-1996-1-007}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {Algorithms are presented for the all-pairs min-cut problem in bounded tree-width, planar and sparse networks. The approach used is to preprocess the input $n$-vertex network so that, afterwards, the value of a min-cut between any two vertices can be efficiently computed. A tradeoff is shown between the preprocessing time and the time taken to compute min-cuts subsequently. In particular, after an $O(n\log n)$ preprocessing of a bounded tree-width network, it is possible to find the value of a min-cut between any two vertices in constant time. This implies that for such networks the all-pairs min-cut problem can be solved in time $O(n^2)$. This algorithm is used in conjunction with a graph decomposition technique of Frederickson to obtain algorithms for sparse and planar networks. The running times depend upon a topological property, $\gamma$, of the input network. The parameter $\gamma$ varies between 1 and $\Theta(n)$; the algorithms perform well when $\gamma = o(n)$. The value of a min-cut can be found in time $O(n + \gamma^2 \log \gamma)$ and all-pairs min-cut can be solved in time $O(n^2 + \gamma^4 \log \gamma)$ for sparse networks. The corresponding running times for planar networks are $O(n+\gamma \log \gamma)$ and $O(n^2 + \gamma^3 \log \gamma)$, respectively. The latter bounds depend on a result of independent interest: outerplanar networks have small ``mimicking'' networks which are also outerplanar.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Arikati, Srinivasa %A Chaudhuri, Shiva %A Zaroliagis, Christos %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T All-pairs min-cut in sparse networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A418-4 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-007 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 27 p. %X Algorithms are presented for the all-pairs min-cut problem in bounded tree-width, planar and sparse networks. The approach used is to preprocess the input $n$-vertex network so that, afterwards, the value of a min-cut between any two vertices can be efficiently computed. A tradeoff is shown between the preprocessing time and the time taken to compute min-cuts subsequently. In particular, after an $O(n\log n)$ preprocessing of a bounded tree-width network, it is possible to find the value of a min-cut between any two vertices in constant time. This implies that for such networks the all-pairs min-cut problem can be solved in time $O(n^2)$. This algorithm is used in conjunction with a graph decomposition technique of Frederickson to obtain algorithms for sparse and planar networks. The running times depend upon a topological property, $\gamma$, of the input network. The parameter $\gamma$ varies between 1 and $\Theta(n)$; the algorithms perform well when $\gamma = o(n)$. The value of a min-cut can be found in time $O(n + \gamma^2 \log \gamma)$ and all-pairs min-cut can be solved in time $O(n^2 + \gamma^4 \log \gamma)$ for sparse networks. The corresponding running times for planar networks are $O(n+\gamma \log \gamma)$ and $O(n^2 + \gamma^3 \log \gamma)$, respectively. The latter bounds depend on a result of independent interest: outerplanar networks have small ``mimicking'' networks which are also outerplanar. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[160]
P. G. Bradford and K. Reinert, “Lower bounds for row minima searching,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-029, 1996.
Abstract
This paper shows that finding the row minima (maxima) in an $n \times n$ totally monotone matrix in the worst case requires any algorithm to make $3n-5$ comparisons or $4n-5$ matrix accesses, whereas the so-called SMAWK algorithm of Aggarwal {\em et al.} finds the row minima in no more than $5n - 2 \lg n - 6$ comparisons.
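SMAWK itself is intricate, but the monotonicity it exploits is easy to exhibit: in a (totally) monotone matrix the column positions of the row minima are nondecreasing, which already yields a simple divide-and-conquer with $O((n+m)\log n)$ matrix accesses. A hedged sketch, not the SMAWK algorithm and not comparison-optimal:

```python
# Hedged sketch (not SMAWK): find the middle row's minimum by scanning the
# allowed column range, then recurse on both halves with the range restricted,
# since the row-minimum columns are nondecreasing in a monotone matrix.
def row_minima(M, rows=None, clo=None, chi=None, out=None):
    if out is None:
        rows, clo, chi, out = (0, len(M) - 1), 0, len(M[0]) - 1, [None] * len(M)
    lo, hi = rows
    if lo > hi:
        return out
    mid = (lo + hi) // 2
    j = min(range(clo, chi + 1), key=lambda c: M[mid][c])   # scan allowed range
    out[mid] = j
    row_minima(M, (lo, mid - 1), clo, j, out)   # upper rows: minima at <= j
    row_minima(M, (mid + 1, hi), j, chi, out)   # lower rows: minima at >= j
    return out

M = [[10, 7, 9], [11, 4, 5], [12, 8, 1]]   # row-minimum columns: 1, 1, 2
print(row_minima(M))                        # -> [1, 1, 2]
```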
Export
BibTeX
@techreport{BradfordReinert96, TITLE = {Lower bounds for row minima searching}, AUTHOR = {Bradford, Phillip Gnassi and Reinert, Knut}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-029}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {This paper shows that finding the row minima (maxima) in an $n \times n$ totally monotone matrix in the worst case requires any algorithm to make $3n-5$ comparisons or $4n-5$ matrix accesses, whereas the so-called SMAWK algorithm of Aggarwal {\em et al.} finds the row minima in no more than $5n - 2 \lg n - 6$ comparisons.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Bradford, Phillip Gnassi %A Reinert, Knut %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Lower bounds for row minima searching : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A021-C %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 12 p. %X This paper shows that finding the row minima (maxima) in an $n \times n$ totally monotone matrix in the worst case requires any algorithm to make $3n-5$ comparisons or $4n-5$ matrix accesses, whereas the so-called SMAWK algorithm of Aggarwal {\em et al.} finds the row minima in no more than $5n - 2 \lg n - 6$ comparisons. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[161]
D. Breslauer, T. Jiang, and Z. Jiang, “Rotations of periodic strings and short superstrings,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-019, 1996.
Abstract
This paper presents two simple approximation algorithms for the shortest superstring problem, with approximation ratios $2 {2\over 3}$ ($\approx 2.67$) and $2 {25\over 42}$ ($\approx 2.596$), improving the best previously published $2 {3\over 4}$ approximation. The framework of our improved algorithms is similar to that of previous algorithms in the sense that they construct a superstring by computing some optimal cycle covers on the distance graph of the given strings, and then break and merge the cycles to finally obtain a Hamiltonian path, but we make use of new bounds on the overlap between two strings. We prove that for each periodic semi-infinite string $\alpha = a_1 a_2 \cdots$ of period $q$, there exists an integer $k$, such that for {\em any} (finite) string $s$ of period $p$ which is {\em inequivalent} to $\alpha$, the overlap between $s$ and the {\em rotation} $\alpha[k] = a_k a_{k+1} \cdots$ is at most $p+{1\over 2}q$. Moreover, if $p \leq q$, then the overlap between $s$ and $\alpha[k]$ is not larger than ${2\over 3}(p+q)$. In the previous shortest superstring algorithms $p+q$ was used as the standard bound on overlap between two strings with periods $p$ and $q$.
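For contrast with the cycle-cover machinery, here is the folklore greedy baseline for shortest superstrings (not the paper's algorithm): repeatedly merge the two strings with the largest overlap. A minimal sketch assuming distinct input strings, none a substring of another:

```python
# Folklore greedy baseline (not the paper's cycle-cover algorithm): repeatedly
# merge the pair with the largest overlap until one string remains.
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    strings = list(strings)
    while len(strings) > 1:
        k, a, b = max((overlap(a, b), a, b)
                      for a in strings for b in strings if a != b)
        strings.remove(a)
        strings.remove(b)
        strings.append(a + b[k:])          # merge, sharing the overlap once
    return strings[0]

print(greedy_superstring(["cde", "abc", "eab"]))  # -> "eabcde" (length 6)
```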
Export
BibTeX
@techreport{BreslauerJiangZhigen97, TITLE = {Rotations of periodic strings and short superstrings}, AUTHOR = {Breslauer, Dany and Jiang, Tao and Jiang, Zhigen}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-019}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {This paper presents two simple approximation algorithms for the shortest superstring problem, with approximation ratios $2 {2\over 3}$ ($\approx 2.67$) and $2 {25\over 42}$ ($\approx 2.596$), improving the best previously published $2 {3\over 4}$ approximation. The framework of our improved algorithms is similar to that of previous algorithms in the sense that they construct a superstring by computing some optimal cycle covers on the distance graph of the given strings, and then break and merge the cycles to finally obtain a Hamiltonian path, but we make use of new bounds on the overlap between two strings. We prove that for each periodic semi-infinite string $\alpha = a_1 a_2 \cdots$ of period $q$, there exists an integer $k$, such that for {\em any} (finite) string $s$ of period $p$ which is {\em inequivalent} to $\alpha$, the overlap between $s$ and the {\em rotation} $\alpha[k] = a_k a_{k+1} \cdots$ is at most $p+{1\over 2}q$. Moreover, if $p \leq q$, then the overlap between $s$ and $\alpha[k]$ is not larger than ${2\over 3}(p+q)$. In the previous shortest superstring algorithms $p+q$ was used as the standard bound on overlap between two strings with periods $p$ and $q$.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Breslauer, Dany %A Jiang, Tao %A Jiang, Zhigen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Rotations of periodic strings and short superstrings : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A17F-5 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 13 p. %X This paper presents two simple approximation algorithms for the shortest superstring problem, with approximation ratios $2 {2\over 3}$ ($\approx 2.67$) and $2 {25\over 42}$ ($\approx 2.596$), improving the best previously published $2 {3\over 4}$ approximation. The framework of our improved algorithms is similar to that of previous algorithms in the sense that they construct a superstring by computing some optimal cycle covers on the distance graph of the given strings, and then break and merge the cycles to finally obtain a Hamiltonian path, but we make use of new bounds on the overlap between two strings. We prove that for each periodic semi-infinite string $\alpha = a_1 a_2 \cdots$ of period $q$, there exists an integer $k$, such that for {\em any} (finite) string $s$ of period $p$ which is {\em inequivalent} to $\alpha$, the overlap between $s$ and the {\em rotation} $\alpha[k] = a_k a_{k+1} \cdots$ is at most $p+{1\over 2}q$. Moreover, if $p \leq q$, then the overlap between $s$ and $\alpha[k]$ is not larger than ${2\over 3}(p+q)$. In the previous shortest superstring algorithms $p+q$ was used as the standard bound on overlap between two strings with periods $p$ and $q$. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[162]
G. S. Brodal, S. Chaudhuri, and J. Radhakrishnan, “The randomized complexity of maintaining the minimum,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-014, 1996.
Abstract
The complexity of maintaining a set under the operations {\sf Insert}, {\sf Delete} and {\sf FindMin} is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost $t$ comparisons per {\sf Insert} and {\sf Delete} has expected cost at least $n/(e2^{2t})-1$ comparisons for {\sf FindMin}. If {\sf FindMin} is replaced by a weaker operation, {\sf FindAny}, then it is shown that a randomized algorithm with constant expected cost per operation exists, but no deterministic algorithm. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given.
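A worked instantiation (not from the report) shows how sharp the tradeoff is at its two extremes: constant-cost updates force an almost-linear {\sf FindMin}, and the bound becomes vacuous exactly where ordinary $O(\log n)$-per-update heaps with constant-time {\sf FindMin} live:

```latex
\[
  t = 1 \;\Rightarrow\; \frac{n}{e\,2^{2t}} - 1 = \frac{n}{4e} - 1 = \Omega(n),
  \qquad
  t = \tfrac{1}{2}\log_2 n \;\Rightarrow\; \frac{n}{e\,2^{2t}} - 1 = \frac{1}{e} - 1 < 1 .
\]
```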
Export
BibTeX
@techreport{BrodalChaudhuriRadhakrishnan96, TITLE = {The randomized complexity of maintaining the minimum}, AUTHOR = {Brodal, Gerth St{\o}lting and Chaudhuri, Shiva and Radhakrishnan, Jaikumar}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-014}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {The complexity of maintaining a set under the operations {\sf Insert}, {\sf Delete} and {\sf FindMin} is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost $t$ comparisons per {\sf Insert} and {\sf Delete} has expected cost at least $n/(e2^{2t})-1$ comparisons for {\sf FindMin}. If {\sf FindMin} is replaced by a weaker operation, {\sf FindAny}, then it is shown that a randomized algorithm with constant expected cost per operation exists, but no deterministic algorithm. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Brodal, Gerth St&#248;lting %A Chaudhuri, Shiva %A Radhakrishnan, Jaikumar %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T The randomized complexity of maintaining the minimum : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A18C-7 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 12 p. %X The complexity of maintaining a set under the operations {\sf Insert}, {\sf Delete} and {\sf FindMin} is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost $t$ comparisons per {\sf Insert} and {\sf Delete} has expected cost at least $n/(e2^{2t})-1$ comparisons for {\sf FindMin}. If {\sf FindMin} is replaced by a weaker operation, {\sf FindAny}, then it is shown that a randomized algorithm with constant expected cost per operation exists, but no deterministic algorithm. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[163]
C. Burnikel, K. Mehlhorn, and S. Schirra, “The LEDA class real number,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-001, 1996.
Abstract
We describe the implementation of the LEDA data type {\bf real}. Every integer is a real and reals are closed under the operations addition, subtraction, multiplication, division and square root. The main features of the data type real are \begin{itemize} \item The user interface is similar to that of the built-in data type double. \item All comparison operators $\{>, \geq, <, \leq, =\}$ are {\em exact}. In order to determine the sign of a real number $x$ the data type first computes a rational number $q$ such that $|x| \leq q$ implies $x = 0$ and then computes an approximation of $x$ of sufficient precision to decide the sign of $x$. The user may assist the data type by providing a separation bound $q$. \item The data type also allows one to evaluate real expressions with arbitrary precision. One may either set the mantissa length of the underlying floating point system and then evaluate the expression with that mantissa length or one may specify an error bound $q$. The data type then computes an approximation with absolute error at most $q$. \end{itemize} The implementation of the data type real is based on the LEDA data types {\bf integer} and {\bf bigfloat} which are the types of arbitrary precision integers and floating point numbers, respectively. The implementation takes various shortcuts for increased efficiency, e.g., a {\bf double} approximation of any real number together with an error bound is maintained and tests are first performed on these approximations. A high precision computation is only started when the test on the {\bf double} approximation is inconclusive.
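The "filter first, refine only if needed" strategy of the last paragraph can be sketched in a few lines. This is hypothetical illustration code (Python's decimal in place of LEDA's bigfloat, with an assumed error contract for approx), not LEDA's implementation:

```python
# Hedged mini-sketch of the sign-test strategy: test the sign on a cheap
# approximation first, escalate precision only when inconclusive; a caller-
# supplied bound q with "|x| <= q implies x = 0" lets the loop answer 0.
from decimal import Decimal, getcontext

def sign(approx, q):
    """approx(p) must return x with absolute error below 10**-p (assumed)."""
    p = 15                                   # roughly double precision first
    while True:
        x, err = approx(p), Decimal(10) ** -p
        if x > err:
            return 1
        if x < -err:
            return -1
        if abs(x) + err <= q:                # certainly |x| <= q, hence x = 0
            return 0
        p *= 2                               # inconclusive: refine and retry

# Example expression: sqrt(2) + sqrt(3) - sqrt(5 + 2*sqrt(6)) is exactly zero.
def approx(p):
    getcontext().prec = p + 10
    s = lambda v: Decimal(v).sqrt()
    return s(2) + s(3) - (Decimal(5) + 2 * s(6)).sqrt()

print(sign(approx, Decimal("1e-40")))        # -> 0
```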
Export
BibTeX
@techreport{BurnikelMehlhornSchirra96, TITLE = {The {LEDA} class real number}, AUTHOR = {Burnikel, Christoph and Mehlhorn, Kurt and Schirra, Stefan}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-001}, NUMBER = {MPI-I-1996-1-001}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {We describe the implementation of the LEDA data type {\bf real}. Every integer is a real and reals are closed under the operations addition, subtraction, multiplication, division and square root. The main features of the data type real are \begin{itemize} \item The user interface is similar to that of the built-in data type double. \item All comparison operators $\{>, \geq, <, \leq, =\}$ are {\em exact}. In order to determine the sign of a real number $x$ the data type first computes a rational number $q$ such that $|x| \leq q$ implies $x = 0$ and then computes an approximation of $x$ of sufficient precision to decide the sign of $x$. The user may assist the data type by providing a separation bound $q$. \item The data type also allows one to evaluate real expressions with arbitrary precision. One may either set the mantissa length of the underlying floating point system and then evaluate the expression with that mantissa length or one may specify an error bound $q$. The data type then computes an approximation with absolute error at most $q$. \end{itemize} The implementation of the data type real is based on the LEDA data types {\bf integer} and {\bf bigfloat} which are the types of arbitrary precision integers and floating point numbers, respectively. The implementation takes various shortcuts for increased efficiency, e.g., a {\bf double} approximation of any real number together with an error bound is maintained and tests are first performed on these approximations. A high precision computation is only started when the test on the {\bf double} approximation is inconclusive.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Burnikel, Christoph %A Mehlhorn, Kurt %A Schirra, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T The LEDA class real number : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A1AD-E %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-001 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 52 p. %X We describe the implementation of the LEDA data type {\bf real}. Every integer is a real and reals are closed under the operations addition, subtraction, multiplication, division and square root. The main features of the data type real are \begin{itemize} \item The user interface is similar to that of the built-in data type double. \item All comparison operators $\{>, \geq, <, \leq, =\}$ are {\em exact}. In order to determine the sign of a real number $x$ the data type first computes a rational number $q$ such that $|x| \leq q$ implies $x = 0$ and then computes an approximation of $x$ of sufficient precision to decide the sign of $x$. The user may assist the data type by providing a separation bound $q$. \item The data type also allows one to evaluate real expressions with arbitrary precision. One may either set the mantissa length of the underlying floating point system and then evaluate the expression with that mantissa length or one may specify an error bound $q$. The data type then computes an approximation with absolute error at most $q$. \end{itemize} The implementation of the data type real is based on the LEDA data types {\bf integer} and {\bf bigfloat} which are the types of arbitrary precision integers and floating point numbers, respectively. The implementation takes various shortcuts for increased efficiency, e.g., a {\bf double} approximation of any real number together with an error bound is maintained and tests are first performed on these approximations. A high precision computation is only started when the test on the {\bf double} approximation is inconclusive. %B Research Report
[164]
C. Burnikel and J. Könemann, “High-precision floating point numbers in LEDA,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-002, 1996.
Export
BibTeX
@techreport{BurnikelKoenemann96, TITLE = {High-precision floating point numbers in {LEDA}}, AUTHOR = {Burnikel, Christoph and K{\"o}nemann, Jochen}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-002}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Burnikel, Christoph %A K&#246;nemann, Jochen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T High-precision floating point numbers in LEDA : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A1AA-3 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 47 p. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[165]
T. Christof, M. Jünger, J. Kececioglou, P. Mutzel, and G. Reinelt, “A branch-and-cut approach to physical mapping with end-probes,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-027, 1996.
Abstract
A fundamental problem in computational biology is the construction of physical maps of chromosomes from hybridization experiments between unique probes and clones of chromosome fragments in the presence of error. Alizadeh, Karp, Weisser and Zweig~\cite{AKWZ94} first considered a maximum-likelihood model of the problem that is equivalent to finding an ordering of the probes that minimizes a weighted sum of errors, and developed several effective heuristics. We show that by exploiting information about the end-probes of clones, this model can be formulated as a weighted Betweenness Problem. This affords the significant advantage of allowing the well-developed tools of integer linear-programming and branch-and-cut algorithms to be brought to bear on physical mapping, enabling us for the first time to solve small mapping instances to optimality even in the presence of high error. We also show that by combining the optimal solution of many small overlapping Betweenness Problems, one can effectively screen errors from larger instances, and solve the edited instance to optimality as a Hamming-Distance Traveling Salesman Problem. This suggests a new combined approach to physical map construction.
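To make the weighted Betweenness objective concrete, here is a toy brute-force evaluator (hypothetical data; the paper of course attacks the problem with an integer linear program and branch-and-cut, not enumeration):

```python
# The weighted Betweenness objective at toy scale: given triples (i, j, k)
# asking that probe j lie between i and k, find a probe ordering minimizing
# the total weight of violated triples.
from itertools import permutations

def best_order(probes, triples):
    """triples: list of ((i, j, k), weight)."""
    def cost(order):
        pos = {p: r for r, p in enumerate(order)}
        return sum(w for (i, j, k), w in triples
                   if not (pos[i] < pos[j] < pos[k] or pos[k] < pos[j] < pos[i]))
    return min(permutations(probes), key=cost)

triples = [(("a", "b", "c"), 2.0), (("b", "c", "d"), 1.0), (("d", "a", "b"), 0.5)]
print(best_order("abcd", triples))   # ('a', 'b', 'c', 'd'): violates only 0.5
```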
Export
BibTeX
@techreport{ChristofJungerKececioglouMutzelReinelt96, TITLE = {A branch-and-cut approach to physical mapping with end-probes}, AUTHOR = {Christof, Thomas and J{\"u}nger, Michael and Kececioglou, John and Mutzel, Petra and Reinelt, Gerhard}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-027}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {A fundamental problem in computational biology is the construction of physical maps of chromosomes from hybridization experiments between unique probes and clones of chromosome fragments in the presence of error. Alizadeh, Karp, Weisser and Zweig~\cite{AKWZ94} first considered a maximum-likelihood model of the problem that is equivalent to finding an ordering of the probes that minimizes a weighted sum of errors, and developed several effective heuristics. We show that by exploiting information about the end-probes of clones, this model can be formulated as a weighted Betweenness Problem. This affords the significant advantage of allowing the well-developed tools of integer linear-programming and branch-and-cut algorithms to be brought to bear on physical mapping, enabling us for the first time to solve small mapping instances to optimality even in the presence of high error. We also show that by combining the optimal solution of many small overlapping Betweenness Problems, one can effectively screen errors from larger instances, and solve the edited instance to optimality as a Hamming-Distance Traveling Salesman Problem. This suggests a new combined approach to physical map construction.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Christof, Thomas %A J&#252;nger, Michael %A Kececioglou, John %A Mutzel, Petra %A Reinelt, Gerhard %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T A branch-and-cut approach to physical mapping with end-probes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A03A-5 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 10 p. %X A fundamental problem in computational biology is the construction of physical maps of chromosomes from hybridization experiments between unique probes and clones of chromosome fragments in the presence of error. Alizadeh, Karp, Weisser and Zweig~\cite{AKWZ94} first considered a maximum-likelihood model of the problem that is equivalent to finding an ordering of the probes that minimizes a weighted sum of errors, and developed several effective heuristics. We show that by exploiting information about the end-probes of clones, this model can be formulated as a weighted Betweenness Problem. This affords the significant advantage of allowing the well-developed tools of integer linear-programming and branch-and-cut algorithms to be brought to bear on physical mapping, enabling us for the first time to solve small mapping instances to optimality even in the presence of high error. We also show that by combining the optimal solution of many small overlapping Betweenness Problems, one can effectively screen errors from larger instances, and solve the edited instance to optimality as a Hamming-Distance Traveling Salesman Problem. This suggests a new combined approach to physical map construction. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[166]
G. Das, S. Kapoor, and M. Smid, “On the complexity of approximating Euclidean traveling salesman tours and minimum spanning trees,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-006, 1996.
Abstract
We consider the problems of computing $r$-approximate traveling salesman tours and $r$-approximate minimum spanning trees for a set of $n$ points in $\IR^d$, where $d \geq 1$ is a constant. In the algebraic computation tree model, the complexities of both these problems are shown to be $\Theta(n \log n/r)$, for all $n$ and $r$ such that $r<n$ and $r$ is larger than some constant. In the more powerful model of computation that additionally uses the floor function and random access, both problems can be solved in $O(n)$ time if $r = \Theta( n^{1-1/d} )$.
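The floor-and-random-access regime has a simple flavor in the plane: bucket the points into a $k \times k$ grid in $O(n)$ time and walk the cells in boustrophedon order. The sketch below illustrates only this bucketing idea for $d = 2$ (it is not the paper's algorithm or analysis); with $k \approx \sqrt{n}$ it corresponds to the $r = \Theta(n^{1-1/d})$ linear-time point of the tradeoff:

```python
# Flavor of the linear-time regime (d = 2): the floor function buckets points
# in O(1) each, and a snake-order sweep of the cells yields a tour.
from math import floor, isqrt

def grid_tour(points):
    """points: list of (x, y) in the unit square [0, 1)^2."""
    k = max(1, isqrt(len(points)))
    cells = [[[] for _ in range(k)] for _ in range(k)]
    for x, y in points:
        cells[floor(y * k)][floor(x * k)].append((x, y))   # O(1) with floor
    tour = []
    for r, row in enumerate(cells):                        # boustrophedon sweep
        for cell in (row if r % 2 == 0 else reversed(row)):
            tour.extend(cell)
    return tour

pts = [(0.1, 0.2), (0.8, 0.1), (0.85, 0.9), (0.2, 0.75), (0.5, 0.5)]
print(grid_tour(pts))
```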
Export
BibTeX
@techreport{DasKapoorSmid96, TITLE = {On the complexity of approximating Euclidean traveling salesman tours and minimum spanning trees}, AUTHOR = {Das, Gautam and Kapoor, Sanjiv and Smid, Michiel}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {We consider the problems of computing $r$-approximate traveling salesman tours and $r$-approximate minimum spanning trees for a set of $n$ points in $\IR^d$, where $d \geq 1$ is a constant. In the algebraic computation tree model, the complexities of both these problems are shown to be $\Theta(n \log n/r)$, for all $n$ and $r$ such that $r<n$ and $r$ is larger than some constant. In the more powerful model of computation that additionally uses the floor function and random access, both problems can be solved in $O(n)$ time if $r = \Theta( n^{1-1/d} )$.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Das, Gautam %A Kapoor, Sanjiv %A Smid, Michiel %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the complexity of approximating Euclidean traveling salesman tours and minimum spanning trees : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A1A1-6 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 14 p. %X We consider the problems of computing $r$-approximate traveling salesman tours and $r$-approximate minimum spanning trees for a set of $n$ points in $\IR^d$, where $d \geq 1$ is a constant. In the algebraic computation tree model, the complexities of both these problems are shown to be $\Theta(n \log n/r)$, for all $n$ and $r$ such that $r<n$ and $r$ is larger than some constant. In the more powerful model of computation that additionally uses the floor function and random access, both problems can be solved in $O(n)$ time if $r = \Theta( n^{1-1/d} )$. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[167]
C. De Simone, M. Diehl, M. Jünger, P. Mutzel, G. Reinelt, and G. Rinaldi, “Exact ground states of two-dimensional $\pm J$ Ising Spin Glasses,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-004, 1996.
Abstract
In this paper we study the problem of finding an exact ground state of a two-dimensional $\pm J$ Ising spin glass on a square lattice with nearest neighbor interactions and periodic boundary conditions when there is a concentration $p$ of negative bonds, with $p$ ranging between $0.1$ and $0.9$. With our exact algorithm we can determine ground states of grids of sizes up to $50\times 50$ in a moderate amount of computation time (up to one hour each) for several values of $p$. For the ground state energy of an infinite spin glass system with $p=0.5$ we estimate $E_{0.5}^\infty = -1.4015 \pm0.0008$. We report on extensive computational tests based on more than $22\,000$ experiments.
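A toy-scale counterpart of this computation can be written down directly (enumeration reaches only tiny lattices, unlike the exact branch-and-cut approach used in the paper; names and parameters below are illustrative):

```python
# Toy-scale illustration: exact ground-state energy per spin of a +-J Ising
# model on an L x L torus by enumeration. Enumeration reaches only ~4 x 4;
# the paper's exact method reaches 50 x 50.
import itertools, random

def ground_state_energy(L, p, seed=0):
    rng = random.Random(seed)
    J = {}                          # nearest-neighbor couplings, periodic bonds
    for i in range(L):
        for j in range(L):
            J[(i, j), ((i + 1) % L, j)] = -1 if rng.random() < p else 1
            J[(i, j), (i, (j + 1) % L)] = -1 if rng.random() < p else 1
    sites = [(i, j) for i in range(L) for j in range(L)]
    best = float("inf")
    for bits in itertools.product((-1, 1), repeat=L * L):
        s = dict(zip(sites, bits))
        E = -sum(c * s[u] * s[v] for (u, v), c in J.items())
        best = min(best, E)
    return best / (L * L)

# One small sample at p = 0.5; compare with the infinite-lattice estimate above.
print(ground_state_energy(3, 0.5))
```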
Export
BibTeX
@techreport{DeSimoneDiehlJuengerMutzelReineltRinaldi96a, TITLE = {Exact ground states of two-dimensional {$\pm J$} Ising Spin Glasses}, AUTHOR = {De Simone, C. and Diehl, M. and J{\"u}nger, Michael and Mutzel, Petra and Reinelt, Gerhard and Rinaldi, G.}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-004}, NUMBER = {MPI-I-1996-1-004}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {In this paper we study the problem of finding an exact ground state of a two-dimensional $\pm J$ Ising spin glass on a square lattice with nearest neighbor interactions and periodic boundary conditions when there is a concentration $p$ of negative bonds, with $p$ ranging between $0.1$ and $0.9$. With our exact algorithm we can determine ground states of grids of sizes up to $50\times 50$ in a moderate amount of computation time (up to one hour each) for several values of $p$. For the ground state energy of an infinite spin glass system with $p=0.5$ we estimate $E_{0.5}^\infty = -1.4015 \pm0.0008$. We report on extensive computational tests based on more than $22\,000$ experiments.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A De Simone, C. %A Diehl, M. %A J&#252;nger, Michael %A Mutzel, Petra %A Reinelt, Gerhard %A Rinaldi, G. %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Exact ground states of two-dimensional $\pm J$ Ising Spin Glasses : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A1A4-F %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-004 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 10 p. %X In this paper we study the problem of finding an exact ground state of a two-dimensional $\pm J$ Ising spin glass on a square lattice with nearest neighbor interactions and periodic boundary conditions when there is a concentration $p$ of negative bonds, with $p$ ranging between $0.1$ and $0.9$. With our exact algorithm we can determine ground states of grids of sizes up to $50\times 50$ in a moderate amount of computation time (up to one hour each) for several values of $p$. For the ground state energy of an infinite spin glass system with $p=0.5$ we estimate $E_{0.5}^\infty = -1.4015 \pm0.0008$. We report on extensive computational tests based on more than $22\,000$ experiments. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[168]
K. Diks and T. Hagerup, “More general parallel tree contraction: Register allocation and broadcasting in a tree,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-024, 1996.
Abstract
We consider arithmetic expressions over operators $+$, $-$, $*$, $/$, and $\sqrt{\ }$, with integer operands. For an expression $E$, a separation bound $sep(E)$ is a positive real number with the property that $E\neq 0$ implies $|E| \geq sep(E)$. We propose a new separation bound that is easy to compute and stronger than previous bounds.
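As a worked instance of the definition (an illustration only, not taken from the report): rationalizing a difference of square roots of distinct naturals gives

```latex
\[
  E = \sqrt{a} - \sqrt{b}, \quad a \neq b \in \mathbb{N}
  \;\Longrightarrow\;
  |E| = \frac{|a-b|}{\sqrt{a}+\sqrt{b}} \geq \frac{1}{\sqrt{a}+\sqrt{b}},
\]
```

so $sep(E) = 1/(\sqrt{a}+\sqrt{b})$ satisfies the requirement that $E \neq 0$ implies $|E| \geq sep(E)$.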
Export
BibTeX
@techreport{DiksHagerup96, TITLE = {More general parallel tree contraction: Register allocation and broadcasting in a tree}, AUTHOR = {Diks, Krzysztof and Hagerup, Torben}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-024}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {We consider arithmetic expressions over operators $+$, $-$, $*$, $/$, and $\sqrt{\ }$, with integer operands. For an expression $E$, a separation bound $sep(E)$ is a positive real number with the property that $E\neq 0$ implies $|E| \geq sep(E)$. We propose a new separation bound that is easy to compute and stronger than previous bounds.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Diks, Krzysztof %A Hagerup, Torben %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T More general parallel tree contraction: Register allocation and broadcasting in a tree : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A055-7 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 24 p. %X We consider arithmetic expressions over operators $+$, $-$, $*$, $/$, and $\sqrt{\ }$, with integer operands. For an expression $E$, a separation bound $sep(E)$ is a positive real number with the property that $E\neq 0$ implies $|E| \geq sep(E)$. We propose a new separation bound that is easy to compute and stronger than previous bounds. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[169]
D. P. Dubhashi, V. Priebe, and D. Ranjan, “Negative dependence through the FKG Inequality,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-020, 1996.
Abstract
We investigate random variables arising in occupancy problems, and show the variables to be negatively associated, that is, negatively dependent in a strong sense. Our proofs are based on the FKG correlation inequality, and they suggest a useful, general technique for proving negative dependence among random variables. We also show that in the special case of two binary random variables, the notions of negative correlation and negative association coincide.
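The flavor of the result is easy to check empirically in the classic occupancy setting (a hypothetical simulation, not from the report): when $m$ balls land uniformly in $n$ bins, the counts of two fixed bins are negatively correlated, since a ball in one bin is a ball missing from the other:

```python
# Hypothetical simulation: m balls into n bins; the sample covariance of two
# bin counts is negative, matching the exact multinomial value -m/n**2.
import random

def sample_cov(m=10, n=4, trials=100_000, seed=1):
    rng = random.Random(seed)
    s1 = s2 = s12 = 0.0
    for _ in range(trials):
        counts = [0] * n
        for _ in range(m):
            counts[rng.randrange(n)] += 1
        s1 += counts[0]
        s2 += counts[1]
        s12 += counts[0] * counts[1]
    return s12 / trials - (s1 / trials) * (s2 / trials)

print(sample_cov())   # close to -m/n**2 = -0.625
```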
Export
BibTeX
@techreport{DubhashiPriebeRanjan96, TITLE = {Negative dependence through the {FKG} Inequality}, AUTHOR = {Dubhashi, Devdatt P. and Priebe, Volker and Ranjan, Desh}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-020}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {We investigate random variables arising in occupancy problems, and show the variables to be negatively associated, that is, negatively dependent in a strong sense. Our proofs are based on the FKG correlation inequality, and they suggest a useful, general technique for proving negative dependence among random variables. We also show that in the special case of two binary random variables, the notions of negative correlation and negative association coincide.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Dubhashi, Devdatt P. %A Priebe, Volker %A Ranjan, Desh %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Negative dependence through the FKG Inequality : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A157-E %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 10 p. %X We investigate random variables arising in occupancy problems, and show the variables to be negatively associated, that is, negatively dependent in a strong sense. Our proofs are based on the FKG correlation inequality, and they suggest a useful, general technique for proving negative dependence among random variables. We also show that in the special case of two binary random variables, the notions of negative correlation and negative association coincide. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[170]
U. Finkler and K. Mehlhorn, “Runtime prediction of real programs on real machines,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-032, 1996.
Abstract
Algorithms are more and more made available as part of libraries or tool kits. For a user of such a library statements of asymptotic running times are almost meaningless as he has no way to estimate the constants involved. To choose the right algorithm for the targeted problem size and the available hardware, knowledge about these constants is important. Methods to determine the constants based on regression analysis or operation counting are not practicable in the general case due to inaccuracy and costs, respectively. We present a new general method to determine the implementation and hardware specific running time constants for combinatorial algorithms. This method requires no changes of the implementation of the investigated algorithm and is applicable to a wide range of programming languages. Only some additional code is necessary. The determined constants are correct within a constant factor which depends only on the hardware platform. As an example the constants of an implementation of a hierarchy of algorithms and data structures are determined. The hierarchy consists of an algorithm for the maximum weighted bipartite matching problem (MWBM), Dijkstra's algorithm, a Fibonacci heap and a graph representation based on adjacency lists. The deviations of the predicted from the measured running times are at most 50 \% on the tested hardware platforms.
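A much cruder relative of the report's idea can be shown in a few lines: time the routine on several input sizes and divide by the asymptotic term to expose the implementation- and hardware-specific constant. Purely illustrative (here for sorting, assumed to run in $c \cdot n \log n$):

```python
# Purely illustrative: estimate the implementation/hardware constant c in an
# assumed c * n * log2(n) running time by timing the code on growing inputs.
import time
from math import log2
from random import random

def measure(n):
    data = [random() for _ in range(n)]
    t0 = time.perf_counter()
    sorted(data)                      # the routine under investigation
    return time.perf_counter() - t0

for n in (10**4, 10**5, 10**6):
    print(f"n = {n:>7}: c = {measure(n) / (n * log2(n)):.2e} s per n log n unit")
```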
Export
BibTeX
@techreport{FinklerMehlhorn96, TITLE = {Runtime prediction of real programs on real machines}, AUTHOR = {Finkler, Ulrich and Mehlhorn, Kurt}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-032}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {Algorithms are more and more made available as part of libraries or tool kits. For a user of such a library statements of asymptotic running times are almost meaningless as he has no way to estimate the constants involved. To choose the right algorithm for the targeted problem size and the available hardware, knowledge about these constants is important. Methods to determine the constants based on regression analysis or operation counting are not practicable in the general case due to inaccuracy and costs, respectively. We present a new general method to determine the implementation and hardware specific running time constants for combinatorial algorithms. This method requires no changes of the implementation of the investigated algorithm and is applicable to a wide range of programming languages. Only some additional code is necessary. The determined constants are correct within a constant factor which depends only on the hardware platform. As an example the constants of an implementation of a hierarchy of algorithms and data structures are determined. The hierarchy consists of an algorithm for the maximum weighted bipartite matching problem (MWBM), Dijkstra's algorithm, a Fibonacci heap and a graph representation based on adjacency lists. The deviations of the predicted from the measured running times are at most 50 \% on the tested hardware platforms.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Finkler, Ulrich %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Runtime prediction of real programs on real machines : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A40D-D %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 10 p. %X Algorithms are more and more made available as part of libraries or tool kits. For a user of such a library statements of asymptotic running times are almost meaningless as he has no way to estimate the constants involved. To choose the right algorithm for the targeted problem size and the available hardware, knowledge about these constants is important. Methods to determine the constants based on regression analysis or operation counting are not practicable in the general case due to inaccuracy and costs, respectively. We present a new general method to determine the implementation and hardware specific running time constants for combinatorial algorithms. This method requires no changes of the implementation of the investigated algorithm and is applicable to a wide range of programming languages. Only some additional code is necessary. The determined constants are correct within a constant factor which depends only on the hardware platform. As an example the constants of an implementation of a hierarchy of algorithms and data structures are determined. The hierarchy consists of an algorithm for the maximum weighted bipartite matching problem (MWBM), Dijkstra's algorithm, a Fibonacci heap and a graph representation based on adjacency lists. The deviations of the predicted from the measured running times are at most 50 \% on the tested hardware platforms. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[171]
N. Garg, S. Chaudhuri, and R. Ravi, “Generalized $k$-Center Problems,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-021, 1996.
Abstract
The $k$-center problem with triangle inequality is that of placing $k$ center nodes in a weighted undirected graph in which the edge weights obey the triangle inequality, so that the maximum distance of any node to its nearest center is minimized. In this paper, we consider a generalization of this problem where, given a number $p$, we wish to place $k$ centers so as to minimize the maximum distance of any node to its $p$-th closest center. We consider three different versions of this reliable $k$-center problem depending on which of the nodes can serve as centers and non-centers and derive best possible approximation algorithms for all three versions.
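As a baseline for the standard $p = 1$ case (not one of the paper's algorithms for the reliable versions), the classic greedy farthest-point heuristic gives a 2-approximation and fits in a few lines; the point set and metric below are hypothetical:

```python
# Classic greedy farthest-point 2-approximation for the standard k-center
# problem (the p = 1 baseline).
def greedy_k_center(points, k, dist):
    centers = [points[0]]
    while len(centers) < k:
        # open the next center at the point farthest from all open centers
        centers.append(max(points, key=lambda q: min(dist(q, c) for c in centers)))
    radius = max(min(dist(q, c) for c in centers) for q in points)
    return centers, radius

pts = [(0, 0), (1, 0), (10, 0), (10, 1), (5, 5)]
d = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
print(greedy_k_center(pts, 2, d))    # the 2 chosen centers and their radius
```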
Export
BibTeX
@techreport{GargChaudhuriRavi96, TITLE = {Generalized {$k$}-Center Problems}, AUTHOR = {Garg, Naveen and Chaudhuri, Shiva and Ravi, R.}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-021}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {The $k$-center problem with triangle inequality is that of placing $k$ center nodes in a weighted undirected graph in which the edge weights obey the triangle inequality, so that the maximum distance of any node to its nearest center is minimized. In this paper, we consider a generalization of this problem where, given a number $p$, we wish to place $k$ centers so as to minimize the maximum distance of any node to its $p$-th closest center. We consider three different versions of this reliable $k$-center problem depending on which of the nodes can serve as centers and non-centers and derive best possible approximation algorithms for all three versions.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Garg, Naveen %A Chaudhuri, Shiva %A Ravi, R. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Generalized $k$-Center Problems : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A121-4 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 9 p. %X The $k$-center problem with triangle inequality is that of placing $k$ center nodes in a weighted undirected graph in which the edge weights obey the triangle inequality, so that the maximum distance of any node to its nearest center is minimized. In this paper, we consider a generalization of this problem where, given a number $p$, we wish to place $k$ centers so as to minimize the maximum distance of any node to its $p$-th closest center. We consider three different versions of this reliable $k$-center problem depending on which of the nodes can serve as centers and non-centers and derive best possible approximation algorithms for all three versions. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[172]
N. Garg, M. Papatriantafilou, and P. Tsigas, “Distributed list coloring: how to dynamically allocate frequencies to mobile base stations,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-010, 1996.
Abstract
To avoid signal interference in mobile communication it is necessary that the channels used by base stations for broadcast communication within their cells are chosen so that the same channel is never concurrently used by two neighboring stations. We model this channel allocation problem as a {\em generalized list coloring problem} and we provide two distributed solutions, which are also able to cope with crash failures, by limiting the size of the network affected by a faulty station in terms of the distance from that station. Our first solution uses a powerful synchronization mechanism to achieve a response time that depends only on $\Delta$, the maximum degree of the signal interference graph, and a failure locality of 4. Our second solution is a simple randomized solution in which each node can expect to pick $f/4\Delta$ colors where $f$ is the size of the list at the node; the response time of this solution is a constant and the failure locality 1. Besides being efficient (their complexity measures involve only small constants), the protocols presented in this work are simple and easy to apply in practice, provided the existence of distributed infrastructure in networks that are in use.
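One round of a randomized scheme in this spirit can be simulated centrally in a few lines. This is a simplified illustration, not the paper's protocol (in particular it ignores asynchrony and failures):

```python
# Simplified, centrally simulated round: every station tentatively picks one
# random color from its list and keeps it only if no neighbor picked the same
# color in this round.
import random

def one_round(lists, adj, rng):
    tentative = {v: rng.choice(sorted(lists[v])) for v in lists if lists[v]}
    kept = {}
    for v, c in tentative.items():
        if all(tentative.get(u) != c for u in adj[v]):
            kept[v] = c
            lists[v].discard(c)       # a kept color leaves the station's list
    return kept

rng = random.Random(0)
lists = {v: {1, 2, 3, 4} for v in "abcd"}
adj = {"a": "bd", "b": "ac", "c": "bd", "d": "ac"}   # interference graph: a 4-cycle
print(one_round(lists, adj, rng))
```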
Export
BibTeX
@techreport{GargPapatriantafilouTsigas96, TITLE = {Distributed list coloring: how to dynamically allocate frequencies to mobile base stations}, AUTHOR = {Garg, Naveen and Papatriantafilou, Marina and Tsigas, Philippas}, LANGUAGE = {eng}, NUMBER = {MPI-I-1996-1-010}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1996}, DATE = {1996}, ABSTRACT = {To avoid signal interference in mobile communication it is necessary that the channels used by base stations for broadcast communication within their cells are chosen so that the same channel is never concurrently used by two neighboring stations. We model this channel allocation problem as a {\em generalized list coloring problem} and we provide two distributed solutions, which are also able to cope with crash failures, by limiting the size of the network affected by a faulty station in terms of the distance from that station. Our first solution uses a powerful synchronization mechanism to achieve a response time that depends only on $\Delta$, the maximum degree of the signal interference graph, and a failure locality of 4. Our second solution is a simple randomized solution in which each node can expect to pick $f/4\Delta$ colors where $f$ is the size of the list at the node; the response time of this solution is a constant and the failure locality 1. Besides being efficient (their complexity measures involve only small constants), the protocols presented in this work are simple and easy to apply in practice, provided the existence of distributed infrastructure in networks that are in use.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Garg, Naveen %A Papatriantafilou, Marina %A Tsigas, Philippas %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Distributed list coloring: how to dynamically allocate frequencies to mobile base stations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A198-B %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1996 %P 15 p. %X To avoid signal interference in mobile communication it is necessary that the channels used by base stations for broadcast communication within their cells are chosen so that the same channel is never concurrently used by two neighboring stations. We model this channel allocation problem as a {\em generalized list coloring problem} and we provide two distributed solutions, which are also able to cope with crash failures, by limiting the size of the network affected by a faulty station in terms of the distance from that station. Our first solution uses a powerful synchronization mechanism to achieve a response time that depends only on $\Delta$, the maximum degree of the signal interference graph, and a failure locality of 4. Our second solution is a simple randomized solution in which each node can expect to pick $f/4\Delta$ colors where $f$ is the size of the list at the node; the response time of this solution is a constant and the failure locality 1. Besides being efficient (their complexity measures involve only small constants), the protocols presented in this work are simple and easy to apply in practice, provided the existence of distributed infrastructure in networks that are in use. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[173]
L. Gasieniec, J. Jansson, A. Lingas, and A. Östlin, “On the complexity of computing evolutionary trees,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-031, 1996.
Abstract
In this paper we study a few important tree optimization problems with applications to computational biology. These problems ask for trees that are consistent with as large a part of the given data as possible. We show that the maximum homeomorphic agreement subtree problem cannot be approximated within a factor of $N^{\epsilon}$, where $N$ is the input size, for any $0 \leq \epsilon < \frac{1}{18}$ in polynomial time, unless P=NP. On the other hand, we present an $O(N\log N)$-time heuristic for the restriction of this problem to instances with $O(1)$ trees of height $O(1)$, yielding solutions within a constant factor of the optimum. We prove that the maximum inferred consensus tree problem is NP-complete and we provide a simple fast heuristic for it, yielding solutions within one third of the optimum. We also present a more specialized polynomial-time heuristic for the maximum inferred local consensus tree problem.
[174]
L. Gasieniec, P. Indyk, and P. Krysta, “External inverse pattern matching,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-030, 1996.
Abstract
We consider the {\sl external inverse pattern matching} problem: given a text $t$ of length $n$ over an ordered alphabet $\Sigma$ with $|\Sigma|=\sigma$, and a number $m\le n$, find a pattern $p\in \Sigma^m$ that is not a subword of $t$ and that maximizes the sum of Hamming distances between $p$ and all subwords of $t$ of length $m$. We present an optimal $O(n\log\sigma)$-time algorithm for the external inverse pattern matching problem, which substantially improves upon the only known polynomial-time algorithm, the $O(nm\log\sigma)$-time algorithm introduced by Amir, Apostolico and Lewenstein. Moreover, we discuss a fast parallel implementation of our algorithm on the CREW PRAM model.
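For intuition, note that the objective decomposes by pattern position: the total Hamming distance equals $\sum_{j} ((n-m+1) - \mathrm{occ}_j(p_j))$, where $\mathrm{occ}_j(c)$ counts the length-$m$ windows whose $j$-th character is $c$, so without the subword constraint one may simply pick at each position $j$ a least frequent character of $t[j..j+n-m]$. The C++ sketch below implements only this counting step; it is an illustrative baseline and does not enforce the requirement that $p$ is not a subword of $t$.

#include <cstdio>
#include <string>
#include <vector>

// Unconstrained inverse pattern matching over a small alphabet: choose, for
// each pattern position j, a character occurring least often among the text
// positions j..j+n-m that position j can be aligned with.  This maximizes
// the sum of Hamming distances to all length-m subwords; the "p must not be
// a subword of t" side condition of the actual problem is NOT handled here.
std::string maxHammingPattern(const std::string& t, int m) {
    const int sigma = 26;                    // alphabet {'a'..'z'}
    int n = t.size(), w = n - m + 1;         // w = number of windows
    std::vector<int> cnt(sigma, 0);
    for (int i = 0; i < w; ++i) ++cnt[t[i] - 'a'];  // window t[0..n-m]

    std::string p(m, 'a');
    for (int j = 0; j < m; ++j) {
        int best = 0;
        for (int c = 1; c < sigma; ++c)
            if (cnt[c] < cnt[best]) best = c;
        p[j] = 'a' + best;
        if (j + 1 < m) {                     // slide window to t[j+1..j+1+n-m]
            --cnt[t[j] - 'a'];
            ++cnt[t[j + w] - 'a'];
        }
    }
    return p;
}

int main() {
    std::printf("%s\n", maxHammingPattern("abracadabra", 3).c_str());
}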
[175]
D. Gunopulos, H. Mannila, and S. Saluja, “Discovering all most specific sentences by randomized algorithms,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-023, 1996.
Abstract
Data mining can in many instances be viewed as the task of computing a representation of a theory of a model or of a database. In this paper we present a randomized algorithm that can be used to compute the representation of a theory in terms of the most specific sentences of that theory. In addition to randomization, the algorithm uses a generalization of the concept of hypergraph transversals. We apply the general algorithm in two ways: for the problem of discovering maximal frequent sets in 0/1 data, and for computing minimal keys in relations. We present some empirical results on the performance of these methods on real data. We also show some complexity-theoretic evidence of the hardness of these problems.
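One plausible building block in this setting is a randomized oracle that returns a single maximal frequent set by scanning the items in random order and keeping every item whose addition stays frequent. The C++ sketch below shows such an oracle on hypothetical 0/1 data; it is not the report's full algorithm, which combines such samples with hypergraph-transversal computations.

#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

// Count how many transactions (rows of 0/1 data) contain every item of s.
int support(const std::vector<std::vector<int>>& rows, const std::vector<int>& s) {
    int k = 0;
    for (const auto& r : rows) {
        bool all = true;
        for (int i : s) all &= (r[i] == 1);
        k += all;
    }
    return k;
}

// Grow a random frequent itemset until it is maximal: scan the items in a
// random order and keep each one whose addition stays above the threshold.
std::vector<int> randomMaximalFrequentSet(
    const std::vector<std::vector<int>>& rows, int nItems, int minSupp,
    std::mt19937& rng) {
    std::vector<int> order(nItems), s;
    for (int i = 0; i < nItems; ++i) order[i] = i;
    std::shuffle(order.begin(), order.end(), rng);
    for (int i : order) {
        s.push_back(i);
        if (support(rows, s) < minSupp) s.pop_back();  // undo infrequent add
    }
    return s;
}

int main() {
    std::vector<std::vector<int>> rows = {
        {1, 1, 0, 1}, {1, 1, 1, 0}, {1, 0, 0, 1}, {0, 1, 1, 0}};
    std::mt19937 rng(7);
    for (int trial = 0; trial < 3; ++trial) {
        auto s = randomMaximalFrequentSet(rows, 4, 2, rng);
        std::printf("maximal frequent set:");
        for (int i : s) std::printf(" %d", i);
        std::printf("\n");
    }
}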
[176]
P. Gupta, R. Janardan, and M. Smid, “Efficient algorithms for counting and reporting pairwise intersections between convex polygons,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-008, 1996.
[177]
P. Gupta, R. Janardan, and M. Smid, “A technique for adding range restrictions to generalized searching problems,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-017, 1996.
Abstract
In a generalized searching problem, a set $S$ of $n$ colored geometric objects has to be stored in a data structure, such that for any given query object $q$, the distinct colors of the objects of $S$ intersected by $q$ can be reported efficiently. In this paper, a general technique is presented for adding a range restriction to such a problem. The technique is applied to the problem of querying a set of colored points (resp.\ fat triangles) with a fat triangle (resp.\ point). For both problems, a data structure is obtained having size $O(n^{1+\epsilon})$ and query time $O((\log n)^2 + C)$. Here, $C$ denotes the number of colors reported by the query, and $\epsilon$ is an arbitrarily small positive constant.
[178]
T. Hagerup, “Vorlesungsskript Komplexitätstheorie,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-96-1-005, 1996.
[179]
M. Jünger and P. Mutzel, “2-Layer straightline crossing minimization: performance of exact and heuristic algorithms,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-025, 1996.
Abstract
We present algorithms for the two-layer straightline crossing minimization problem that are able to compute exact optima. Our computational results lead us to the conclusion that there is no need for heuristics if one layer is fixed, even though the problem is NP-hard, and that for the general problem with two variable layers, true optima can be computed for sparse instances in which the smaller layer contains up to 15 nodes. For bigger instances, the iterated barycenter method turns out to be the method of choice among several popular heuristics whose performance we could assess by comparing their results to optimum solutions.
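For reference, one pass of the barycenter heuristic mentioned above orders the free layer by the average fixed-layer position of each node's neighbors. A minimal C++ sketch, with an assumed toy instance and index-based tie-breaking:

#include <algorithm>
#include <cstdio>
#include <vector>

// One barycenter pass for 2-layer crossing minimization: the lower layer is
// fixed at positions 0..k-1; each free (upper) node is placed according to
// the average position of its neighbors, ties broken by node index.
int main() {
    // adj[v] = fixed-layer positions adjacent to free node v
    std::vector<std::vector<int>> adj = {{3, 4}, {0, 4}, {1, 2}, {0}};
    int n = adj.size();
    std::vector<double> bary(n, 0.0);
    for (int v = 0; v < n; ++v) {
        for (int p : adj[v]) bary[v] += p;
        if (!adj[v].empty()) bary[v] /= adj[v].size();
    }
    std::vector<int> order(n);
    for (int v = 0; v < n; ++v) order[v] = v;
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        return bary[a] != bary[b] ? bary[a] < bary[b] : a < b;
    });
    std::printf("free-layer order:");
    for (int v : order) std::printf(" %d", v);
    std::printf("\n");
}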
[180]
S. Mahajan and R. Hariharan, “Derandomizing semidefinite programming based approximation algorithms,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-013, 1996.
[181]
M. Mavronicolas, M. Papatriantafilou, and P. Tsigas, “The impact of timing on linearizability in counting networks,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-011, 1996.
Abstract
{\em Counting networks} form a new class of distributed, low-contention data structures, made up of {\em balancers} and {\em wires}, which are suitable for solving a variety of multiprocessor synchronization problems that can be expressed as counting problems. A {\em linearizable} counting network guarantees that the order of the values it returns respects the real-time order in which they were requested. Linearizability significantly raises the capabilities of the network, but at a possible price in network size or synchronization support. In this work, we further pursue the systematic study of the impact of {\em timing} assumptions on linearizability for counting networks, along the line of research recently initiated by Lynch~{\em et~al.} in [18]. We consider two basic {\em timing} models, the {\em instantaneous balancer} model, in which the transition of a token from an input to an output port of a balancer is modeled as an instantaneous event, and the {\em periodic balancer} model, where balancers send out tokens at a fixed rate. In both models, we assume lower and upper bounds on the delays incurred by wires connecting the balancers. We present necessary and sufficient conditions for linearizability in these models, in the form of precise inequalities that involve not only parameters of the timing models, but also certain structural parameters of the counting network, which may be of more general interest. Our results extend and strengthen previous impossibility and possibility results on linearizability in counting networks.
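For intuition, a balancer can be modeled as a toggle that routes incoming tokens alternately to its two output wires; a single balancer is already a width-2 counting network whose outputs satisfy the step property. A minimal sequential C++ sketch, abstracting away the timing models discussed above:

#include <cstdio>

// A balancer as a toggle bit: tokens leave alternately on wire 0 and wire 1.
// Fed k tokens, the output counts (ceil(k/2), floor(k/2)) satisfy the step
// property, which is what makes a network of balancers a counting network.
struct Balancer {
    int toggle = 0;
    int traverse() {            // returns the output wire for one token
        int out = toggle;
        toggle ^= 1;
        return out;
    }
};

int main() {
    Balancer b;
    int count[2] = {0, 0};
    for (int token = 0; token < 7; ++token) ++count[b.traverse()];
    std::printf("wire 0: %d tokens, wire 1: %d tokens\n", count[0], count[1]);
}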
[182]
K. Mehlhorn, S. Näher, S. Schirra, M. Seel, and C. Uhrig, “A computational basis for higher-dimensional computational geometry,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-016, 1996.
Abstract
We specify and implement a kernel for computational geometry in arbitrary finite dimensional space. The kernel provides points, vectors, directions, hyperplanes, segments, rays, lines, affine transformations, and operations connecting these types. Points have rational coordinates, hyperplanes have rational coefficients, and analogous statements hold for the other types. We therefore call our types \emph{rat\_point}, \emph{rat\_vector}, \emph{rat\_direction}, \emph{rat\_hyperplane}, \emph{rat\_segment}, \emph{rat\_ray} and \emph{rat\_line}. All geometric primitives are \emph{exact}, i.e., they do not incur rounding error (because they are implemented using rational arithmetic) and always produce the correct result. To this end we provide types \emph{integer\_vector} and \emph{integer\_matrix} which realize exact linear algebra over the integers. The kernel is submitted to the CGAL-Consortium as a proposal for its higher-dimensional geometry kernel and will become part of the LEDA platform for combinatorial and geometric computing.
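The benefit of exact arithmetic is easiest to see on a predicate. The C++ sketch below evaluates the classical orientation test exactly over integer coordinates and, for contrast, in double precision; it is a generic illustration and does not use the kernel's actual rat_point interface.

#include <cstdio>

// Orientation of (a,b,c): sign of the 2x2 determinant
//   | bx-ax  by-ay |
//   | cx-ax  cy-ay |
// Over integer coordinates this is exact when evaluated in a wide integer
// type (__int128 is a GCC/Clang extension); the same formula in doubles may
// round the sign away for near-degenerate inputs, which is exactly what an
// exact kernel rules out.
int orientExact(long long ax, long long ay, long long bx, long long by,
                long long cx, long long cy) {
    __int128 det = (__int128)(bx - ax) * (cy - ay)
                 - (__int128)(by - ay) * (cx - ax);
    return det > 0 ? 1 : det < 0 ? -1 : 0;
}

int main() {
    // Three almost-collinear points with large coordinates.
    long long ax = 0, ay = 0;
    long long bx = 1000000000000LL, by = 1000000000000LL;
    long long cx = 2000000000000LL, cy = 2000000000001LL;
    double fdet = (double)(bx - ax) * (double)(cy - ay)
                - (double)(by - ay) * (double)(cx - ax);
    std::printf("exact sign: %d, double det: %.1f\n",
                orientExact(ax, ay, bx, by, cx, cy), fdet);
}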
[183]
P. Mutzel, T. Odenthal, and M. Scharbrodt, “The thickness of graphs: a survey,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-009, 1996.
Abstract
We give a state-of-the-art survey of the thickness of a graph from both a theoretical and a practical point of view. After summarizing the relevant results concerning this topological invariant of a graph, we deal with practical computation of the thickness. We present some modifications of a basic heuristic and investigate their usefulness for evaluating the thickness and determining a decomposition of a graph into planar subgraphs.
[184]
K. Reinert, H.-P. Lenhof, P. Mutzel, K. Mehlhorn, and J. Kececioglu, “A branch-and-cut algorithm for multiple sequence alignment,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-028, 1996.
Abstract
Multiple sequence alignment is an important problem in computational biology. We study the Maximum Trace formulation introduced by Kececioglu~\cite{Kececioglu91}. We first phrase the problem in terms of forbidden subgraphs, which enables us to express Maximum Trace as an integer linear-programming problem, and then solve the integer linear program using methods from polyhedral combinatorics. The trace {\it polytope\/} is the convex hull of all feasible solutions to the Maximum Trace problem; for the case of two sequences, we give a complete characterization of this polytope. This yields a polynomial-time algorithm for a general version of pairwise sequence alignment that, perhaps surprisingly, does not use dynamic programming; this yields, for instance, a non-dynamic-programming algorithm for sequence comparison under the 0-1 metric, which gives another answer to a long-open question in the area of string algorithms \cite{PW93}. For the multiple-sequence case, we derive several classes of facet-defining inequalities and show that for all but one class, the corresponding separation problem can be solved in polynomial time. This leads to a branch-and-cut algorithm for multiple sequence alignment, and we report on our first computational experience. It appears that a polyhedral approach to multiple sequence alignment can solve instances that are beyond present dynamic-programming approaches.
[185]
J. Rieger, “Proximity in arrangements of algebraic sets,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-003, 1996.
Abstract
Let $X$ be an arrangement of $n$ algebraic sets $X_i$ in $d$-space, where the $X_i$ are either parameterized or zero-sets of dimension $0\le m_i\le d-1$. We study a number of decompositions of $d$-space into connected regions in which the distance-squared function to $X$ has certain invariances. These decompositions can be used in the following proximity problems: given some point, find the $k$ nearest sets $X_i$ in the arrangement, find the nearest point in $X$, or (assuming that $X$ is compact) find the farthest point in $X$ and hence the smallest enclosing $(d-1)$-sphere. We give bounds on the complexity of the decompositions in terms of $n$, $d$, and the degrees and dimensions of the algebraic sets $X_i$.
[186]
S. Saluja and P. Gupta, “Optimal algorithms for some proximity problems on the Gaussian sphere with applications,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-022, 1996.
Abstract
We consider some geometric problems on the unit sphere which arise in $NC$-machining. Optimal linear time algorithms are given for these problems using linear and quadratic programming in three dimensions.
[187]
M. Seel, “A runtime test of integer arithmetic and linear algebra in LEDA,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-033, 1996.
Abstract
In this Research Report we assess the current efficiency of two LEDA software layers. We examine the runtime of the LEDA big integer number type |integer| and of the linear algebra classes |integer_matrix| and |integer_vector|.
[188]
J. F. Sibeyn, P. S. Rao, and B. H. H. Juurlink, “Gossiping on meshes and tori,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-018, 1996.
Abstract
Algorithms for performing gossiping on one- and higher-dimensional meshes are presented. As a routing model, we assume the practically important wormhole routing. For one-dimensional arrays and rings, we give a novel lower bound and an asymptotically optimal gossiping algorithm for all choices of the parameters involved. For two-dimensional meshes and tori, several simple algorithms composed of one-dimensional phases are presented. For an important range of packet and mesh sizes they give clear improvements upon previously developed algorithms. The algorithms are analyzed theoretically, and the achieved improvements are also convincingly demonstrated by simulations and by an implementation on the Paragon. For example, on a Paragon with $81$ processors and messages of size 32 KB, relying on the built-in router requires $716$ milliseconds, while our algorithm requires only $79$ milliseconds. For higher-dimensional meshes, we give algorithms which are based on a generalized notion of a diagonal. These are analyzed theoretically and by simulation.
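As a baseline for the problem, gossiping on a unidirectional ring completes in $n-1$ rounds if every node forwards everything it knows one hop per round. The C++ simulation below shows this trivial scheme; it is an illustration of the task, not one of the report's wormhole-routing algorithms.

#include <cstdio>
#include <vector>

// Gossiping on a unidirectional ring of n nodes: in each round every node
// sends all packets it knows to its right neighbor.  After n-1 rounds each
// node knows all n packets, which is the trivial baseline that mesh and
// torus algorithms improve on under wormhole routing.
int main() {
    int n = 6;
    // knows[v][u] == true iff node v has node u's packet.
    std::vector<std::vector<char>> knows(n, std::vector<char>(n, 0));
    for (int v = 0; v < n; ++v) knows[v][v] = 1;

    int rounds = 0;
    for (bool done = false; !done; ++rounds) {
        auto next = knows;
        for (int v = 0; v < n; ++v)              // v sends to (v+1) mod n
            for (int u = 0; u < n; ++u)
                if (knows[v][u]) next[(v + 1) % n][u] = 1;
        knows = next;
        done = true;
        for (int v = 0; v < n && done; ++v)
            for (int u = 0; u < n && done; ++u) done = knows[v][u];
    }
    std::printf("ring of %d nodes: gossip complete after %d rounds\n", n, rounds);
}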
[189]
J. L. Träff and C. Zaroliagis, “A simple parallel algorithm for the single-source shortest path problem on planar digraphs,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-012, 1996.
Abstract
We present a simple parallel algorithm for the {\em single-source shortest path problem} in {\em planar digraphs} with nonnegative real edge weights. The algorithm runs on the EREW PRAM model of parallel computation in $O((n^{2\epsilon} + n^{1-\epsilon})\log n)$ time, performing $O(n^{1+\epsilon}\log n)$ work for any $0<\epsilon<1/2$. The strength of the algorithm is its simplicity, making it easy to implement and presumably quite efficient in practice. The algorithm improves upon the work of all previous algorithms. The work can be further reduced to $O(n^{1+\epsilon})$ by plugging in a less practical, sequential planar shortest path algorithm. Our algorithm is based on a region decomposition of the input graph, and uses a well-known parallel implementation of Dijkstra's algorithm.
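The sequential building block here is Dijkstra's algorithm, which the region decomposition allows to be run on many small subgraphs in parallel. For reference, a textbook C++ Dijkstra over an assumed toy digraph with nonnegative weights:

#include <cstdio>
#include <queue>
#include <vector>

// Textbook Dijkstra with a binary heap: the sequential subroutine that a
// region-decomposition-based parallel SSSP algorithm applies per region.
int main() {
    int n = 5;
    // Directed edges (u, v, w) with nonnegative weights.
    std::vector<std::vector<std::pair<int, double>>> adj(n);
    auto edge = [&](int u, int v, double w) { adj[u].push_back({v, w}); };
    edge(0, 1, 2.0); edge(0, 2, 5.0); edge(1, 2, 1.0);
    edge(2, 3, 2.0); edge(1, 3, 6.0); edge(3, 4, 1.0);

    const double INF = 1e18;
    std::vector<double> dist(n, INF);
    using QE = std::pair<double, int>;                   // (distance, node)
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> pq;
    dist[0] = 0.0;
    pq.push({0.0, 0});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;                       // stale entry
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) { dist[v] = d + w; pq.push({dist[v], v}); }
    }
    for (int v = 0; v < n; ++v) std::printf("dist(0,%d) = %.1f\n", v, dist[v]);
}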
[190]
M. Vingron, H.-P. Lenhof, and P. Mutzel, “Computational Molecular Biology,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1996-1-015, 1996.
Abstract
Computational Biology is a fairly new subject that arose in response to the computational problems posed by the analysis and the processing of biomolecular sequence and structure data. The field was initiated in the late 60's and early 70's largely by pioneers working in the life sciences. Physicists and mathematicians entered the field in the 70's and 80's, while Computer Science became involved with the new biological problems in the late 1980's. Computational problems have gained further importance in molecular biology through the various genome projects which produce enormous amounts of data. For this bibliography we focus on those areas of computational molecular biology that involve discrete algorithms or discrete optimization. We thus neglect several other areas of computational molecular biology, like most of the literature on the protein folding problem, as well as databases for molecular and genetic data, and genetic mapping algorithms.
1995
[191]
A. Andersson, S. Nilsson, T. Hagerup, and R. Raman, “Sorting in linear time?,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1995-1-024, 1995.
Abstract
We show that a unit-cost RAM with a word length of $w$ bits can sort $n$ integers in the range $0\ldots 2^w-1$ in $O(n\log\log n)$ time, for arbitrary $w\ge\log n$, a significant improvement over the bound of $O(n\sqrt{\log n})$ achieved by the fusion trees of Fredman and Willard. Provided that $w\ge(\log n)^{2+\epsilon}$ for some fixed $\epsilon>0$, the sorting can even be accomplished in linear expected time with a randomized algorithm. Both of our algorithms parallelize without loss on a unit-cost PRAM with a word length of $w$ bits. The first one yields an algorithm that uses $O(\log n)$ time and $O(n\log\log n)$ operations on a deterministic CRCW PRAM. The second one yields an algorithm that uses $O(\log n)$ expected time and $O(n)$ expected operations on a randomized EREW PRAM, provided that $w\ge(\log n)^{2+\epsilon}$ for some fixed $\epsilon>0$. Our deterministic and randomized sequential and parallel algorithms generalize to the lexicographic sorting problem of sorting multiple-precision integers represented in several words.
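The report's bounds rest on word-level parallelism well beyond a short sketch, so the following is a baseline only: a minimal Python LSD radix sort (the function name and the digit width r are our illustrative choices, not taken from the report). It already sorts $n$ integers in $O(n)$ time when $w=O(\log n)$, whereas the algorithms above handle arbitrary $w\ge\log n$.

def radix_sort(a, w, r=8):
    # LSD radix sort of nonnegative w-bit integers using r-bit digits:
    # O((w/r) * (n + 2^r)) time, i.e. O(n) whenever w = O(log n).
    mask = (1 << r) - 1
    for shift in range(0, w, r):
        buckets = [[] for _ in range(1 << r)]
        for x in a:  # stable distribution pass on the current digit
            buckets[(x >> shift) & mask].append(x)
        a = [x for b in buckets for x in b]
    return a

print(radix_sort([13, 2, 255, 7, 128], w=8))  # [2, 7, 13, 128, 255]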
Export
BibTeX
@techreport{AnderssonNilssonHagerupRaman95, TITLE = {Sorting in linear time?}, AUTHOR = {Andersson, A. and Nilsson, S. and Hagerup, Torben and Raman, Rajeev}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-024}, NUMBER = {MPI-I-1995-1-024}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1995}, DATE = {1995}, ABSTRACT = {We show that a unit-cost RAM with a word length of $w$ bits can sort $n$ integers in the range $0\Ttwodots 2^w-1$ in $O(n\log\log n)$ time, for arbitrary $w\ge\log n$, a significant improvement over the bound of $O(n\sqrt{\log n})$ achieved by the fusion trees of Fredman and Willard. Provided that $w\ge(\log n)^{2+\epsilon}$ for some fixed $\epsilon>0$, the sorting can even be accomplished in linear expected time with a randomized algorithm. Both of our algorithms parallelize without loss on a unit-cost PRAM with a word length of $w$ bits. The first one yields an algorithm that uses $O(\log n)$ time and\break $O(n\log\log n)$ operations on a deterministic CRCW PRAM. The second one yields an algorithm that uses $O(\log n)$ expected time and $O(n)$ expected operations on a randomized EREW PRAM, provided that $w\ge(\log n)^{2+\epsilon}$ for some fixed $\epsilon>0$. Our deterministic and randomized sequential and parallel algorithms generalize to the lexicographic sorting problem of sorting multiple-precision integers represented in several words.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Andersson, A. %A Nilsson, S. %A Hagerup, Torben %A Raman, Rajeev %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Sorting in linear time? : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A1DE-D %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-024 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1995 %P 32 p. %X We show that a unit-cost RAM with a word length of $w$ bits can sort $n$ integers in the range $0\Ttwodots 2^w-1$ in $O(n\log\log n)$ time, for arbitrary $w\ge\log n$, a significant improvement over the bound of $O(n\sqrt{\log n})$ achieved by the fusion trees of Fredman and Willard. Provided that $w\ge(\log n)^{2+\epsilon}$ for some fixed $\epsilon>0$, the sorting can even be accomplished in linear expected time with a randomized algorithm. Both of our algorithms parallelize without loss on a unit-cost PRAM with a word length of $w$ bits. The first one yields an algorithm that uses $O(\log n)$ time and\break $O(n\log\log n)$ operations on a deterministic CRCW PRAM. The second one yields an algorithm that uses $O(\log n)$ expected time and $O(n)$ expected operations on a randomized EREW PRAM, provided that $w\ge(\log n)^{2+\epsilon}$ for some fixed $\epsilon>0$. Our deterministic and randomized sequential and parallel algorithms generalize to the lexicographic sorting problem of sorting multiple-precision integers represented in several words. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[192]
S. R. Arikati, A. Maheshwari, and C. Zaroliagis, “Efficient computation of implicit representations of sparse graphs (revised version),” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1995-1-013, 1995.
Abstract
The problem of finding an implicit representation for a graph such that vertex adjacency can be tested quickly is fundamental to all graph algorithms. In particular, it is possible to represent sparse graphs on $n$ vertices using $O(n)$ space such that vertex adjacency is tested in $O(1)$ time. We show here how to construct such a representation efficiently by providing simple and optimal algorithms, both in a sequential and a parallel setting. Our sequential algorithm runs in $O(n)$ time. The parallel algorithm runs in $O(\log n)$ time using $O(n/\log n)$ CRCW PRAM processors, or in $O(\log n\log^* n)$ time using $O(n/(\log n\log^* n))$ EREW PRAM processors. Previous results for this problem are based on matroid partitioning and thus have a high complexity.
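One standard way to obtain such a representation for a graph of bounded degeneracy $d$ (which covers the sparse graphs considered here) is to orient every edge along a degeneracy ordering, so that each vertex stores at most $d$ out-neighbors and adjacency is tested by scanning two short lists. The Python sketch below is a simple $O(m\log n)$ construction for illustration only, not the report's optimal sequential or parallel algorithms; all names are our assumptions.

from collections import defaultdict
import heapq

def orient_by_degeneracy(n, edges):
    # Compute a degeneracy ordering with a lazy min-heap, then orient
    # every edge forward along it; each out-list has size <= degeneracy d.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(adj[v]) for v in range(n)}
    heap = [(deg[v], v) for v in range(n)]
    heapq.heapify(heap)
    removed, order = set(), []
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue  # stale heap entry
        removed.add(v)
        order.append(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
                heapq.heappush(heap, (deg[w], w))
    pos = {v: i for i, v in enumerate(order)}
    out = defaultdict(list)
    for u, v in edges:
        out[u if pos[u] < pos[v] else v].append(v if pos[u] < pos[v] else u)
    return out

def adjacent(out, u, v):
    # At most 2d comparisons, i.e. O(1) for bounded degeneracy.
    return v in out[u] or u in out[v]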
Export
BibTeX
@techreport{ArikatiMaheshwariZaroliagis95, TITLE = {Efficient computation of implicit representations of sparse graphs (revised version)}, AUTHOR = {Arikati, Srinivasa R. and Maheshwari, Anil and Zaroliagis, Christos}, LANGUAGE = {eng}, NUMBER = {MPI-I-1995-1-013}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1995}, DATE = {1995}, ABSTRACT = {The problem of finding an implicit representation for a graph such that vertex adjacency can be tested quickly is fundamental to all graph algorithms. In particular, it is possible to represent sparse graphs on $n$ vertices using $O(n)$ space such that vertex adjacency is tested in $O(1)$ time. We show here how to construct such a representation efficiently by providing simple and optimal algorithms, both in a sequential and a parallel setting. Our sequential algorithm runs in $O(n)$ time. The parallel algorithm runs in $O(\log n)$ time using $O(n/{\log n})$ CRCW PRAM processors, or in $O(\log n\log^*n)$ time using $O(n/\log n\log^*n)$ EREW PRAM processors. Previous results for this problem are based on matroid partitioning and thus have a high complexity.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Arikati, Srinivasa R. %A Maheshwari, Anil %A Zaroliagis, Christos %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Efficient computation of implicit representations of sparse graphs (revised version) : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A704-1 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1995 %P 16 p. %X The problem of finding an implicit representation for a graph such that vertex adjacency can be tested quickly is fundamental to all graph algorithms. In particular, it is possible to represent sparse graphs on $n$ vertices using $O(n)$ space such that vertex adjacency is tested in $O(1)$ time. We show here how to construct such a representation efficiently by providing simple and optimal algorithms, both in a sequential and a parallel setting. Our sequential algorithm runs in $O(n)$ time. The parallel algorithm runs in $O(\log n)$ time using $O(n/{\log n})$ CRCW PRAM processors, or in $O(\log n\log^*n)$ time using $O(n/\log n\log^*n)$ EREW PRAM processors. Previous results for this problem are based on matroid partitioning and thus have a high complexity. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[193]
H. L. Bodlaender and T. Hagerup, “Parallel Algorithms with Optimal Speedup for Bounded Treewidth,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-95-1-017, 1995.
Abstract
We describe the first parallel algorithm with optimal speedup for constructing minimum-width tree decompositions of graphs of bounded treewidth. On $n$-vertex input graphs, the algorithm works in $O((\log n)^2)$ time using $O(n)$ operations on the EREW PRAM. We also give faster parallel algorithms with optimal speedup for the problem of deciding whether the treewidth of an input graph is bounded by a given constant, and for a variety of problems on graphs of bounded treewidth, including all decision problems expressible in monadic second-order logic. On $n$-vertex input graphs, the algorithms use $O(n)$ operations together with $O(\log n\log^* n)$ time on the EREW PRAM, or $O(\log n)$ time on the CRCW PRAM.
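For orientation, a tree decomposition assigns a bag of vertices to every node of a tree so that each graph vertex and each graph edge occurs in some bag, and the bags containing any fixed vertex form a connected subtree; the width is the maximum bag size minus one. Below is a minimal Python checker for these three conditions (purely illustrative; it plays no role in the report's algorithms, and the data layout is our assumption).

from collections import defaultdict

def is_tree_decomposition(graph_edges, bags, tree_edges):
    # bags: bag id -> set of graph vertices; tree_edges: edges over bag ids.
    vertices = {x for e in graph_edges for x in e}
    # (1) every vertex occurs in some bag
    if not vertices <= set().union(*bags.values()):
        return False
    # (2) every graph edge is contained in some bag
    for u, v in graph_edges:
        if not any(u in b and v in b for b in bags.values()):
            return False
    # (3) the bags containing any fixed vertex induce a connected subtree
    adj = defaultdict(set)
    for a, b in tree_edges:
        adj[a].add(b)
        adj[b].add(a)
    for x in vertices:
        containing = {i for i, b in bags.items() if x in b}
        stack, seen = [next(iter(containing))], set()
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            stack.extend(j for j in adj[i] if j in containing)
        if seen != containing:
            return False
    return True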
Export
BibTeX
@techreport{Bodlaender-Hagerup95, TITLE = {Parallel Algorithms with Optimal Speedup for Bounded Treewidth}, AUTHOR = {Bodlaender, Hans L. and Hagerup, Torben}, LANGUAGE = {eng}, NUMBER = {MPI-I-95-1-017}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1995}, DATE = {1995}, ABSTRACT = {We describe the first parallel algorithm with optimal speedup for constructing minimum-width tree decompositions of graphs of bounded treewidth. On $n$-vertex input graphs, the algorithm works in $O((\log n)^2)$ time using $O(n)$ operations on the EREW PRAM. We also give faster parallel algorithms with optimal speedup for the problem of deciding whether the treewidth of an input graph is bounded by a given constant and for a variety of problems on graphs of bounded treewidth, including all decision problems expressible in monadic second-order logic. On $n$-vertex input graphs, the algorithms use $O(n)$ operations together with $O(\log n\Tlogstar n)$ time on the EREW PRAM, or $O(\log n)$ time on the CRCW PRAM.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Bodlaender, Hans L. %A Hagerup, Torben %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Parallel Algorithms with Optimal Speedup for Bounded Treewidth : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0019-DBA6-8 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1995 %X We describe the first parallel algorithm with optimal speedup for constructing minimum-width tree decompositions of graphs of bounded treewidth. On $n$-vertex input graphs, the algorithm works in $O((\log n)^2)$ time using $O(n)$ operations on the EREW PRAM. We also give faster parallel algorithms with optimal speedup for the problem of deciding whether the treewidth of an input graph is bounded by a given constant and for a variety of problems on graphs of bounded treewidth, including all decision problems expressible in monadic second-order logic. On $n$-vertex input graphs, the algorithms use $O(n)$ operations together with $O(\log n\Tlogstar n)$ time on the EREW PRAM, or $O(\log n)$ time on the CRCW PRAM. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[194]
P. G. Bradford, “Matching nuts and bolts optimally,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1995-1-025, 1995.
Abstract
The nuts and bolts problem is the following: given a collection of $n$ nuts of distinct sizes and $n$ bolts of distinct sizes such that for each nut there is exactly one matching bolt, find for each nut its corresponding bolt, subject to the restriction that we can {\em only} compare nuts to bolts; that is, we can neither compare nuts to nuts nor bolts to bolts. This humble restriction on the comparisons appears to make the problem quite difficult to solve. In this paper, we establish the existence of an algorithm for the nuts and bolts problem that makes $O(n \lg n)$ nut-and-bolt comparisons, by showing the existence of certain expander-based comparator networks. Our algorithm is asymptotically optimal in the number of nut-and-bolt comparisons it performs. Another view of this result is that we show the existence of a decision tree of depth $O(n \lg n)$ that solves this problem.
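The report proves existence via expander-based comparator networks and gives no explicit construction; for intuition, the folklore randomized quicksort-style algorithm solves the problem with $O(n\log n)$ expected comparisons and fits in a few lines. A hedged Python sketch (integers stand in for sizes; every comparison below is between a nut and a bolt, never two nuts or two bolts):

import random

def match_nuts_and_bolts(nuts, bolts):
    # Quicksort-style matching: O(n log n) expected nut-vs-bolt comparisons.
    if not nuts:
        return []
    pivot_bolt = random.choice(bolts)
    pivot_nut = next(n for n in nuts if n == pivot_bolt)
    small_n = [n for n in nuts if n < pivot_bolt]
    large_n = [n for n in nuts if n > pivot_bolt]
    small_b = [b for b in bolts if b < pivot_nut]
    large_b = [b for b in bolts if b > pivot_nut]
    return (match_nuts_and_bolts(small_n, small_b)
            + [(pivot_nut, pivot_bolt)]
            + match_nuts_and_bolts(large_n, large_b))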
Export
BibTeX
@techreport{Bradford95, TITLE = {Matching nuts and bolts optimally}, AUTHOR = {Bradford, Phillip G.}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-025}, NUMBER = {MPI-I-1995-1-025}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1995}, DATE = {1995}, ABSTRACT = {The nuts and bolts problem is the following : Given a collection of $n$ nuts of distinct sizes and $n$ bolts of distinct sizes such that for each nut there is exactly one matching bolt, find for each nut its corresponding bolt subject to the restriction that we can {\em only} compare nuts to bolts. That is we can neither compare nuts to nuts, nor bolts to bolts. This humble restriction on the comparisons appears to make this problem quite difficult to solve. In this paper, we illustrate the existence of an algorithm for solving the nuts and bolts problem that makes $O(n \lg n)$ nut-and-bolt comparisons. We show the existence of this algorithm by showing the existence of certain expander-based comparator networks. Our algorithm is asymptotically optimal in terms of the number of nut-and-bolt comparisons it does. Another view of this result is that we show the existence of a decision tree with depth $O(n \lg n)$ that solves this problem.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Bradford, Phillip G. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Matching nuts and bolts optimally : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A1DB-4 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-025 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1995 %P 24 p. %X The nuts and bolts problem is the following : Given a collection of $n$ nuts of distinct sizes and $n$ bolts of distinct sizes such that for each nut there is exactly one matching bolt, find for each nut its corresponding bolt subject to the restriction that we can {\em only} compare nuts to bolts. That is we can neither compare nuts to nuts, nor bolts to bolts. This humble restriction on the comparisons appears to make this problem quite difficult to solve. In this paper, we illustrate the existence of an algorithm for solving the nuts and bolts problem that makes $O(n \lg n)$ nut-and-bolt comparisons. We show the existence of this algorithm by showing the existence of certain expander-based comparator networks. Our algorithm is asymptotically optimal in terms of the number of nut-and-bolt comparisons it does. Another view of this result is that we show the existence of a decision tree with depth $O(n \lg n)$ that solves this problem. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[195]
P. G. Bradford and V. Capoyleas, “Weak epsilon-nets for points on a hypersphere,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1995-1-029, 1995.
Abstract
We present algorithms for the two-layer straight-line crossing minimization problem that are able to compute exact optima. Our computational results lead us to conclude that there is no need for heuristics if one layer is fixed, even though the problem is NP-hard, and that for the general problem with two variable layers, true optima can be computed for sparse instances in which the smaller layer contains up to 15 nodes. For bigger instances, the iterated barycenter method turns out to be the method of choice among several popular heuristics, whose performance we could assess by comparing their results to optimum solutions.
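The iterated barycenter method mentioned above repeatedly reorders one layer by the average positions of each node's neighbors in the other, fixed layer. A minimal Python sketch of a single pass (the data layout and the handling of isolated nodes are our assumptions):

def barycenter_pass(fixed_pos, free_nodes, neighbors):
    # Reorder the free layer by the mean position of each node's
    # neighbors in the fixed layer; isolated nodes get a middle weight.
    def weight(v):
        nbrs = neighbors.get(v, [])
        if not nbrs:
            return len(fixed_pos) / 2.0
        return sum(fixed_pos[u] for u in nbrs) / len(nbrs)
    return sorted(free_nodes, key=weight)

Iterating such passes while alternating which layer is held fixed gives the iterated method.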
Export
BibTeX
@techreport{BradfordCapoyleas95, TITLE = {Weak epsilon-nets for points on a hypersphere}, AUTHOR = {Bradford, Phillip G. and Capoyleas, Vasilis}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-029}, NUMBER = {MPI-I-1995-1-029}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1995}, DATE = {1995}, ABSTRACT = {We present algorithms for the two layer straightline crossing minimization problem that are able to compute exact optima. Our computational results lead us to the conclusion that there is no need for heuristics if one layer is fixed, even though the problem is NP-hard, and that for the general problem with two variable layers, true optima can be computed for sparse instances in which the smaller layer contains up to 15 nodes. For bigger instances, the iterated barycenter method turns out to be the method of choice among several popular heuristics whose performance we could assess by comparing the results to optimum solutions.}, TYPE = {Research Report}, }
Endnote
%0 Report %A Bradford, Phillip G. %A Capoyleas, Vasilis %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Weak epsilon-nets for points on a hypersphere : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A1CF-F %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-029 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1995 %P 8 p. %X We present algorithms for the two layer straightline crossing minimization problem that are able to compute exact optima. Our computational results lead us to the conclusion that there is no need for heuristics if one layer is fixed, even though the problem is NP-hard, and that for the general problem with two variable layers, true optima can be computed for sparse instances in which the smaller layer contains up to 15 nodes. For bigger instances, the iterated barycenter method turns out to be the method of choice among several popular heuristics whose performance we could assess by comparing the results to optimum solutions. %B Research Report
[196]
P. G. Bradford, R. Fleischer, and M. Smid, “A polylog-time and $O(n\sqrt\lg n)$-work parallel algorithm for finding the row minima in totally monotone matrices,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1995-1-006, 1995.
Abstract
We give a parallel algorithm for computing all row minima in a totally monotone $n\times n$ matrix which is simpler and more work-efficient than previous polylog-time algorithms. It runs in $O(\lg n \lg\lg n)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CRCW} PRAM, in $O(\lg n (\lg\lg n)^2)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CREW} PRAM, and in $O(\lg n\sqrt{\lg n \lg\lg n})$ time doing $O(n\sqrt{\lg n\lg\lg n})$ work on an {\sf EREW} PRAM.
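For context, in a (totally) monotone matrix the column indices of the leftmost row minima are nondecreasing from top to bottom, which already yields a simple sequential divide-and-conquer baseline with $O((n+m)\log n)$ entry evaluations; the report's contribution is the parallel, more work-efficient solution. The Python sketch below is only this sequential baseline (the accessor m(i, j) is our assumption):

def row_minima(m, n_rows, n_cols):
    # m(i, j) returns the matrix entry. Monotonicity lets us split the
    # column range around the middle row's leftmost minimum.
    result = [0] * n_rows
    def solve(r0, r1, c0, c1):
        if r0 > r1:
            return
        mid = (r0 + r1) // 2
        # min() returns the first, i.e. leftmost, minimizing column
        best = min(range(c0, c1 + 1), key=lambda c: m(mid, c))
        result[mid] = best
        solve(r0, mid - 1, c0, best)
        solve(mid + 1, r1, best, c1)
    solve(0, n_rows - 1, 0, n_cols - 1)
    return result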
Export
BibTeX
@techreport{BradfordFleischerSmid95, TITLE = {A polylog-time and \$O(n{\textbackslash}sqrt{\textbackslash}lg n)\$-work parallel algorithm for finding the row minima in totally monotone matrices}, AUTHOR = {Bradford, Phillip Gnassi and Fleischer, Rudolf and Smid, Michiel}, LANGUAGE = {eng}, NUMBER = {MPI-I-1995-1-006}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1995}, DATE = {1995}, ABSTRACT = {We give a parallel algorithm for computing all row minima in a totally monotone $n\times n$ matrix which is simpler and more work efficient than previous polylog-time algorithms. It runs in $O(\lg n \lg\lg n)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CRCW}, in $O(\lg n (\lg\lg n)^2)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CREW}, and in $O(\lg n\sqrt{\lg n \lg\lg n})$ time doing $O(n\sqrt{\lg n\lg\lg n})$ work on an {\sf EREW}.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Bradford, Phillip Gnassi %A Fleischer, Rudolf %A Smid, Michiel %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A polylog-time and $O(n\sqrt\lg n)$-work parallel algorithm for finding the row minima in totally monotone matrices : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A75F-8 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1995 %P 12 p. %X We give a parallel algorithm for computing all row minima in a totally monotone $n\times n$ matrix which is simpler and more work efficient than previous polylog-time algorithms. It runs in $O(\lg n \lg\lg n)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CRCW}, in $O(\lg n (\lg\lg n)^2)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CREW}, and in $O(\lg n\sqrt{\lg n \lg\lg n})$ time doing $O(n\sqrt{\lg n\lg\lg n})$ work on an {\sf EREW}. %B Research Report / Max-Planck-Institut f&#252;r Informatik
[197]
P. G. Bradford and R. Fleischer, “Matching nuts and bolts faster,” Max-Planck-Institut für Informatik, Saarbrücken, MPI-I-1995-1-003, 1995.
Abstract
The problem of matching nuts and bolts is the following: given a collection of $n$ nuts of distinct sizes and $n$ bolts such that there is a one-to-one correspondence between the nuts and the bolts, find for each nut its corresponding bolt. We can {\em only} compare nuts to bolts; that is, we can neither compare nuts to nuts nor bolts to bolts. This humble restriction on the comparisons appears to make the problem very hard to solve. In fact, the best deterministic solution to date is due to Alon et al. [1] and takes $\Theta(n \log^4 n)$ time. Their solution uses (efficient) graph expanders. In this paper, we give a simpler $\Theta(n \log^2 n)$ time algorithm which uses only a simple (and not so efficient) expander.
Export
BibTeX
@techreport{BradfordFleischer, TITLE = {Matching nuts and bolts faster}, AUTHOR = {Bradford, Phillip Gnassi and Fleischer, Rudolf}, LANGUAGE = {eng}, URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-003}, NUMBER = {MPI-I-1995-1-003}, INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {1995}, DATE = {1995}, ABSTRACT = {The problem of matching nuts and bolts is the following : Given a collection of $n$ nuts of distinct sizes and $n$ bolts such that there is a one-to-one correspondence between the nuts and the bolts, find for each nut its corresponding bolt. We can {\em only} compare nuts to bolts. That is we can neither compare nuts to nuts, nor bolts to bolts. This humble restriction on the comparisons appears to make this problem very hard to solve. In fact, the best deterministic solution to date is due to Alon {\it et al\/.} [1] and takes $\Theta(n \log^4 n)$ time. Their solution uses (efficient) graph expanders. In this paper, we give a simpler $\Theta(n \log^2 n)$ time algorithm which uses only a simple (and not so efficient) expander.}, TYPE = {Research Report / Max-Planck-Institut f&#252;r Informatik}, }
Endnote
%0 Report %A Bradford, Phillip Gnassi %A Fleischer, Rudolf %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Matching nuts and bolts faster : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0014-A846-5 %U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-003 %Y Max-Planck-Institut f&#252;r Informatik %C Saarbr&#252;cken %D 1995 %P 7 p. %X The problem of matching nuts and bolts is the following : Given a collection of $n$ nuts of distinct sizes and $n$ bolts such that there is a one-to-one correspondence between the nuts and the bolts, find for each nut its corresponding bolt. We can {\em only} compare nuts to bolts. That is we can neither compare nuts to nuts, nor bolts to bolts. This humble restriction on the comparisons appears to make this problem very hard to solve. In fact, the best deterministic solution to date is due to Alon {\it et al\/.} [1] and takes $\Theta(n \log^4 n)$ time. Their solution uses (efficient) graph expanders. In this paper, we give a simpler $\Theta(n \log^2 n)$ time algorithm which uses only a simple (and not so efficient) expander. %B Research Report / Max-Planck-Institut f&#252;r Informatik