Publications

2019
[1]
A. Abboud, K. Bringmann, D. Hermelin, and D. Shabtay, “SETH-Based Lower Bounds for Subset Sum and Bicriteria Path,” in SODA’19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms, San Diego, CA, USA. (Accepted/in press)
Export
BibTeX
@inproceedings{Abboud_SODA19b, TITLE = {{SETH}-Based Lower Bounds for Subset Sum and Bicriteria Path}, AUTHOR = {Abboud, Amir and Bringmann, Karl and Hermelin, Danny and Shabtay, Dvir}, LANGUAGE = {eng}, PUBLISHER = {SIAM}, YEAR = {2019}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {SODA'19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms}, ADDRESS = {San Diego, CA, USA}, }
Endnote
%0 Conference Proceedings %A Abboud, Amir %A Bringmann, Karl %A Hermelin, Danny %A Shabtay, Dvir %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T SETH-Based Lower Bounds for Subset Sum and Bicriteria Path : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9E12-8 %D 2018 %B 30th Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2019-01-06 - 2019-01-09 %C San Diego, CA, USA %B SODA'19 %I SIAM
[2]
A. Antoniadis, K. Fleszar, R. Hoeksma, and K. Schewior, “A PTAS for Euclidean TSP with Hyperplane Neighborhoods,” in SODA’19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms, San Diego, CA, USA. (Accepted/in press)
Export
BibTeX
@inproceedings{Antoniadis_SODA19, TITLE = {A {PTAS} for {E}uclidean {TSP} with Hyperplane Neighborhoods}, AUTHOR = {Antoniadis, Antonios and Fleszar, Krzysztof and Hoeksma, Ruben and Schewior, Kevin}, LANGUAGE = {eng}, PUBLISHER = {SIAM}, YEAR = {2019}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {SODA'19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms}, ADDRESS = {San Diego, CA, USA}, }
Endnote
%0 Conference Proceedings %A Antoniadis, Antonios %A Fleszar, Krzysztof %A Hoeksma, Ruben %A Schewior, Kevin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T A PTAS for Euclidean TSP with Hyperplane Neighborhoods : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9F3A-B %D 2018 %B 30th Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2019-01-06 - 2019-01-09 %C San Diego, CA, USA %B SODA'19 %I SIAM
[3]
F. Ban, V. Bhattiprolu, K. Bringmann, P. Kolev, E. Lee, and D. Woodruff, “A PTAS for ℓ_p-Low Rank Approximation,” in SODA’19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms, San Diego, CA, USA. (Accepted/in press)
Export
BibTeX
@inproceedings{Ban_SODA19a, TITLE = {A {PTAS} for $\ell_p$-Low Rank Approximation}, AUTHOR = {Ban, Frank and Bhattiprolu, Vijay and Bringmann, Karl and Kolev, Pavel and Lee, Euiwoong and Woodruff, David}, LANGUAGE = {eng}, PUBLISHER = {SIAM}, YEAR = {2019}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {SODA'19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms}, ADDRESS = {San Diego, CA, USA}, }
Endnote
%0 Conference Proceedings %A Ban, Frank %A Bhattiprolu, Vijay %A Bringmann, Karl %A Kolev, Pavel %A Lee, Euiwoong %A Woodruff, David %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A PTAS for l_p-Low Rank Approximation : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9E0E-E %D 2018 %B 30th Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2019-01-06 - 2019-01-09 %C San Diego, CA, USA %B SODA'19 %I SIAM
[4]
K. Bringmann, M. Künnemann, and P. Wellnitz, “Few Matches or Almost Periodicity: Faster Pattern Matching with Mismatches in Compressed Texts,” in SODA’19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms, San Diego, CA, USA. (Accepted/in press)
Export
BibTeX
@inproceedings{Bringmann_SODA19c, TITLE = {Few Matches or Almost Periodicity: {F}aster Pattern Matching with Mismatches in Compressed Texts}, AUTHOR = {Bringmann, Karl and K{\"u}nnemann, Marvin and Wellnitz, Philip}, LANGUAGE = {eng}, PUBLISHER = {SIAM}, YEAR = {2019}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {SODA'19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms}, ADDRESS = {San Diego, CA, USA}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Künnemann, Marvin %A Wellnitz, Philip %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Few Matches or Almost Periodicity: Faster Pattern Matching with Mismatches in Compressed Texts : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9E1F-B %D 2018 %B 30th Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2019-01-06 - 2019-01-09 %C San Diego, CA, USA %B SODA'19 %I SIAM
[5]
K. Bringmann, M. Künnemann, and A. Nusser, “Fréchet Distance Under Translation: Conditional Hardness and an Algorithm via Offline Dynamic Grid Reachability,” in SODA’19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms, San Diego, CA, USA. (Accepted/in press)
Export
BibTeX
@inproceedings{Bringmann_SODA19d, TITLE = {{F}r\'{e}chet Distance Under Translation: {C}onditional Hardness and an Algorithm via Offline Dynamic Grid Reachability}, AUTHOR = {Bringmann, Karl and K{\"u}nnemann, Marvin and Nusser, Andr{\'e}}, LANGUAGE = {eng}, PUBLISHER = {SIAM}, YEAR = {2019}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {SODA'19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms}, ADDRESS = {San Diego, CA, USA}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Künnemann, Marvin %A Nusser, André %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fréchet Distance Under Translation: Conditional Hardness and an Algorithm via Offline Dynamic Grid Reachability : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9E29-F %D 2018 %B 30th Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2019-01-06 - 2019-01-09 %C San Diego, CA, USA %B SODA'19 %I SIAM
[6]
P. Bürgisser, C. Ikenmeyer, and G. Panova, “No Occurrence Obstructions in Geometric Complexity Theory,” Journal of the American Mathematical Society, vol. 32, 2019.
Export
BibTeX
@article{Buergisser2019, TITLE = {No Occurrence Obstructions in Geometric Complexity Theory}, AUTHOR = {B{\"u}rgisser, Peter and Ikenmeyer, Christian and Panova, Greta}, LANGUAGE = {eng}, ISSN = {0894-0347}, DOI = {10.1090/jams/908}, PUBLISHER = {The Society}, ADDRESS = {Providence, R.I.}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, JOURNAL = {Journal of the American Mathematical Society}, VOLUME = {32}, PAGES = {163--193}, }
Endnote
%0 Journal Article %A Bürgisser, Peter %A Ikenmeyer, Christian %A Panova, Greta %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T No Occurrence Obstructions in Geometric Complexity Theory : %G eng %U http://hdl.handle.net/21.11116/0000-0002-72B9-D %R 10.1090/jams/908 %7 2018 %D 2019 %J Journal of the American Mathematical Society %O J. Amer. Math. Soc. %V 32 %& 163 %P 163 - 193 %I The Society %C Providence, R.I. %@ false
[7]
L. S. Chandran, D. Issac, and S. Zhou, “Hadwiger’s Conjecture for Squares of 2-Trees,” European Journal of Combinatorics, vol. 76, 2019.
Export
BibTeX
@article{CHANDRAN2019hadwiger, TITLE = {Hadwiger's Conjecture for Squares of 2-Trees}, AUTHOR = {Chandran, L. Sunil and Issac, Davis and Zhou, Sanming}, LANGUAGE = {eng}, ISSN = {0195-6698}, DOI = {10.1016/j.ejc.2018.10.003}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, DATE = {2019}, JOURNAL = {European Journal of Combinatorics}, VOLUME = {76}, PAGES = {159--174}, }
Endnote
%0 Journal Article %A Chandran, L. Sunil %A Issac, Davis %A Zhou, Sanming %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Hadwiger's Conjecture for Squares of 2-Trees : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9E5B-7 %R 10.1016/j.ejc.2018.10.003 %7 2018 %D 2019 %J European Journal of Combinatorics %V 76 %& 159 %P 159 - 174 %I Elsevier %C Amsterdam %@ false
[8]
E. Cruciani, E. Natale, and G. Scornavacca, “Rigorous Analysis of a Label Propagation Algorithm for Distributed Community Detection,” in Thirty-Third AAAI Conference on Artificial Intelligence, Honolulu, HI, USA. (Accepted/in press)
Export
BibTeX
@inproceedings{Cruciani_aaai18, TITLE = {Rigorous Analysis of a Label Propagation Algorithm for Distributed Community Detection}, AUTHOR = {Cruciani, Emilio and Natale, Emanuele and Scornavacca, Giacomo}, LANGUAGE = {eng}, PUBLISHER = {AAAI}, YEAR = {2019}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Thirty-Third AAAI Conference on Artificial Intelligence}, ADDRESS = {Honolulu, HI, USA}, }
Endnote
%0 Conference Proceedings %A Cruciani, Emilio %A Natale, Emanuele %A Scornavacca, Giacomo %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Rigorous Analysis of a Label Propagation Algorithm for Distributed Community Detection : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A985-9 %D 2018 %B Thirty-Third AAAI Conference on Artificial Intelligence %Z date of event: 2019-01-27 - 2019-02-01 %C Honolulu, HI, USA %B Thirty-Third AAAI Conference on Artificial Intelligence %I AAAI
[9]
G. Jindal and M. Bläser, “On the Complexity of Symmetric Polynomials,” in 10th Innovations in Theoretical Computer Science (ITCS 2019), San Diego, CA, USA. (Accepted/in press)
Export
BibTeX
@inproceedings{Jindal_ITCS2019, TITLE = {On the Complexity of Symmetric Polynomials}, AUTHOR = {Jindal, Gorav and Bl{\"a}ser, Markus}, LANGUAGE = {eng}, YEAR = {2019}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {10th Innovations in Theoretical Computer Science (ITCS 2019)}, ADDRESS = {San Diego, CA, USA}, }
Endnote
%0 Conference Proceedings %A Jindal, Gorav %A Bläser, Markus %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On the Complexity of Symmetric Polynomials : %G eng %U http://hdl.handle.net/21.11116/0000-0002-ABCC-8 %D 2018 %B 10th Innovations in Theoretical Computer Science %Z date of event: 2019-01-10 - 2019-01-12 %C San Diego, CA, USA %B 10th Innovations in Theoretical Computer Science
[10]
E. Oh, “Optimal Algorithm for Geodesic Nearest-point Voronoi Diagrams in Simple Polygons,” in SODA’19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms, San Diego, CA, USA. (Accepted/in press)
Export
BibTeX
@inproceedings{Oh_SODA19d, TITLE = {Optimal Algorithm for Geodesic Nearest-point {V}oronoi Diagrams in Simple Polygons}, AUTHOR = {Oh, Eunjin}, LANGUAGE = {eng}, PUBLISHER = {SIAM}, YEAR = {2019}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {SODA'19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms}, ADDRESS = {San Diego, CA, USA}, }
Endnote
%0 Conference Proceedings %A Oh, Eunjin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Optimal Algorithm for Geodesic Nearest-point Voronoi Diagrams in Simple Polygons : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AA78-8 %D 2018 %B 30th Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2019-01-06 - 2019-01-09 %C San Diego, CA, USA %B SODA'19 %I SIAM
[11]
A. Pandey, G. Jindal, M. Bläser, and V. Bhargava, “Deterministic PTAS for the Algebraic Rank,” in SODA’19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms, San Diego, CA, USA. (Accepted/in press)
Export
BibTeX
@inproceedings{Bhargava_SODA19d, TITLE = {Deterministic {PTAS} for the Algebraic Rank}, AUTHOR = {Pandey, Anurag and Jindal, Gorav and Bl{\"a}ser, Markus and Bhargava, Vishwas}, LANGUAGE = {eng}, PUBLISHER = {SIAM}, YEAR = {2019}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {SODA'19, 30th Annual ACM-SIAM Symposium on Discrete Algorithms}, ADDRESS = {San Diego, CA, USA}, }
Endnote
%0 Conference Proceedings %A Pandey, Anurag %A Jindal, Gorav %A Bläser, Markus %A Bhargava, Vishwas %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Deterministic PTAS for the Algebraic Rank : %G eng %U http://hdl.handle.net/21.11116/0000-0002-ABAD-B %D 2018 %B 30th Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2019-01-06 - 2019-01-09 %C San Diego, CA, USA %B SODA'19 %I SIAM
2018
[12]
A. Abboud, A. Backurs, K. Bringmann, and M. Künnemann, “Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve,” 2018. [Online]. Available: http://arxiv.org/abs/1803.00796. (arXiv: 1803.00796)
Abstract
Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size $n$ of data that originally has size $N$, and we want to solve a problem with time complexity $T(\cdot)$. The naive strategy of "decompress-and-solve" gives time $T(N)$, whereas "the gold standard" is time $T(n)$: to analyze the compression as efficiently as if the original data was small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be unified under the elegant notion of Grammar Compressions. A vast literature, across many disciplines, established this as an influential notion for Algorithm design. We introduce a framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are: - The $O(nN\sqrt{\log{N/n}})$ bound for LCS and the $O(\min\{N \log N, nM\})$ bound for Pattern Matching with Wildcards are optimal up to $N^{o(1)}$ factors, under the Strong Exponential Time Hypothesis. (Here, $M$ denotes the uncompressed length of the compressed pattern.) - Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the $k$-Clique conjecture. - We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.
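For background on the abstract above: the gap between the compressed size n and the decompressed size N, which is what makes "decompress-and-solve" expensive, is already visible on a toy grammar compression (straight-line program). The sketch below is an editor's illustration in Python with made-up rule names and helper functions; it is not code from the paper.
```python
# Toy straight-line program (grammar compression): every rule is either a
# terminal character or the concatenation of two earlier rules.

def decompressed_length(rules):
    """Length N of the generated string, computed in O(n) time on the n rules,
    i.e. without decompressing."""
    length = {}
    for name, body in rules:
        length[name] = 1 if isinstance(body, str) else length[body[0]] + length[body[1]]
    return length[rules[-1][0]]

def decompress(rules):
    """The decompress-and-solve baseline: materialises all N characters."""
    text = {}
    for name, body in rules:
        text[name] = body if isinstance(body, str) else text[body[0]] + text[body[1]]
    return text[rules[-1][0]]

# X1 -> "a", Xi -> X(i-1) X(i-1): n = 11 rules generate a string of length 2^10.
rules = [("X1", "a")] + [(f"X{i}", (f"X{i-1}", f"X{i-1}")) for i in range(2, 12)]
print(decompressed_length(rules), len(decompress(rules)))  # 1024 1024
```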
Export
BibTeX
@online{Abboud_arXiv1803.00796, TITLE = {Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve}, AUTHOR = {Abboud, Amir and Backurs, Arturs and Bringmann, Karl and K{\"u}nnemann, Marvin}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1803.00796}, EPRINT = {1803.00796}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size $n$ of data that originally has size $N$, and we want to solve a problem with time complexity $T(\cdot)$. The naive strategy of "decompress-and-solve" gives time $T(N)$, whereas "the gold standard" is time $T(n)$: to analyze the compression as efficiently as if the original data was small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be unified under the elegant notion of Grammar Compressions. A vast literature, across many disciplines, established this as an influential notion for Algorithm design. We introduce a framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are: -- The $O(nN\sqrt{\log{N/n}})$ bound for LCS and the $O(\min\{N \log N, nM\})$ bound for Pattern Matching with Wildcards are optimal up to $N^{o(1)}$ factors, under the Strong Exponential Time Hypothesis. (Here, $M$ denotes the uncompressed length of the compressed pattern.) -- Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the $k$-Clique conjecture. -- We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.}, }
Endnote
%0 Report %A Abboud, Amir %A Backurs, Arturs %A Bringmann, Karl %A Künnemann, Marvin %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3E38-C %U http://arxiv.org/abs/1803.00796 %D 2018 %X Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size $n$ of data that originally has size $N$, and we want to solve a problem with time complexity $T(\cdot)$. The naive strategy of "decompress-and-solve" gives time $T(N)$, whereas "the gold standard" is time $T(n)$: to analyze the compression as efficiently as if the original data was small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be unified under the elegant notion of Grammar Compressions. A vast literature, across many disciplines, established this as an influential notion for Algorithm design. We introduce a framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are: - The $O(nN\sqrt{\log{N/n}})$ bound for LCS and the $O(\min\{N \log N, nM\})$ bound for Pattern Matching with Wildcards are optimal up to $N^{o(1)}$ factors, under the Strong Exponential Time Hypothesis. (Here, $M$ denotes the uncompressed length of the compressed pattern.) - Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the $k$-Clique conjecture. - We give an algorithm showing that decompress-and-solve is not optimal for Disjointness. %K Computer Science, Computational Complexity, cs.CC,Computer Science, Data Structures and Algorithms, cs.DS
[13]
A. Abboud, K. Bringmann, H. Dell, and J. Nederlof, “More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture,” in STOC’18, 50th Annual ACM SIGACT Symposium on Theory of Computing, Los Angeles, CA, USA, 2018.
Export
BibTeX
@inproceedings{Abboud_STOC2018, TITLE = {More Consequences of Falsifying {SETH} and the Orthogonal Vectors Conjecture}, AUTHOR = {Abboud, Amir and Bringmann, Karl and Dell, Holger and Nederlof, Jesper}, LANGUAGE = {eng}, ISBN = {978-1-4503-5559-9}, DOI = {10.1145/3188745.3188938}, PUBLISHER = {ACM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {STOC'18, 50th Annual ACM SIGACT Symposium on Theory of Computing}, PAGES = {253--266}, ADDRESS = {Los Angeles, CA, USA}, }
Endnote
%0 Conference Proceedings %A Abboud, Amir %A Bringmann, Karl %A Dell, Holger %A Nederlof, Jesper %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture : %G eng %U http://hdl.handle.net/21.11116/0000-0002-1707-D %R 10.1145/3188745.3188938 %D 2018 %B 50th Annual ACM SIGACT Symposium on Theory of Computing %Z date of event: 2018-06-25 - 2018-06-29 %C Los Angeles, CA, USA %B STOC'18 %P 253 - 266 %I ACM %@ 978-1-4503-5559-9
[14]
A. Abboud and K. Bringmann, “Tighter Connections Between Formula-SAT and Shaving Logs,” in 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), Prague, Czech Republic, 2018.
Export
BibTeX
@inproceedings{Abboud_ICALP2018, TITLE = {Tighter Connections Between Formula-{SAT} and Shaving Logs}, AUTHOR = {Abboud, Amir and Bringmann, Karl}, LANGUAGE = {eng}, ISBN = {978-3-95977-076-7}, URL = {urn:nbn:de:0030-drops-90129}, DOI = {10.4230/LIPIcs.ICALP.2018.8}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)}, EDITOR = {Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D{\'a}niel and Sannella, Donald}, PAGES = {1--18}, EID = {8}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {107}, ADDRESS = {Prague, Czech Republic}, }
Endnote
%0 Conference Proceedings %A Abboud, Amir %A Bringmann, Karl %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Tighter Connections Between Formula-SAT and Shaving Logs : %G eng %U http://hdl.handle.net/21.11116/0000-0002-16FB-B %R 10.4230/LIPIcs.ICALP.2018.8 %U urn:nbn:de:0030-drops-90129 %D 2018 %B 45th International Colloquium on Automata, Languages, and Programming %Z date of event: 2018-07-09 - 2018-07-13 %C Prague, Czech Republic %B 45th International Colloquium on Automata, Languages, and Programming %E Chatzigiannakis, Ioannis; Kaklamanis, Christos; Marx, Dániel; Sannella, Donald %P 1 - 18 %Z sequence number: 8 %I Schloss Dagstuhl %@ 978-3-95977-076-7 %B Leibniz International Proceedings in Informatics %N 107 %U http://drops.dagstuhl.de/opus/volltexte/2018/9012/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[15]
A. Abboud and K. Bringmann, “Tighter Connections Between Formula-SAT and Shaving Logs,” 2018. [Online]. Available: http://arxiv.org/abs/1804.08978. (arXiv: 1804.08978)
Abstract
A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the $O(n^2)$ dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to $O(n^2/\log^2 n)$ in several ways and using a variety of ingenious tricks. This line of research, also known as "the art of shaving log factors", lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time $O(n^2/\log^3 n)$? Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on Boolean formulas (Formula-SAT) faster than exhaustive search. They show that an $O(n^2/\log^{1000} n)$ algorithm for LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear. In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way for shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Fr\'echet distance problem from Computational Geometry, we show that an $O(n^2/\log^{7+\varepsilon} n)$ runtime would imply new Formula-SAT algorithms. Our main result is a reduction from SAT on formulas of size $s$ over $n$ variables to LCS on sequences of length $N=2^{n/2} \cdot s^{1+o(1)}$. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS with $N=2^{n/2} \cdot s^c$, for some $c \geq 100$.
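For context on the abstract above: the O(n^2) baseline it refers to is the textbook dynamic program for Longest Common Subsequence. The following minimal Python sketch (an editor's illustration of that baseline, not code from the paper) shows the algorithm whose log-factor speed-ups the result ties to Formula-SAT.
```python
def lcs_length(a, b):
    """Textbook O(|a|*|b|) dynamic program for Longest Common Subsequence,
    using one rolling row of the DP table."""
    dp = [0] * (len(b) + 1)   # dp[j] = LCS length of the processed prefix of a and b[:j]
    for ch in a:
        prev_diag = 0         # value of the cell diagonally up-left
        for j in range(1, len(b) + 1):
            prev_diag, dp[j] = dp[j], (prev_diag + 1 if ch == b[j - 1]
                                       else max(dp[j], dp[j - 1]))
    return dp[len(b)]

# Example: the LCS of "ABCBDAB" and "BDCABA" has length 4 (e.g. "BDAB").
print(lcs_length("ABCBDAB", "BDCABA"))
```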
Export
BibTeX
@online{Abboud_arXiv1804.08978, TITLE = {Tighter Connections Between Formula-{SAT} and Shaving Logs}, AUTHOR = {Abboud, Amir and Bringmann, Karl}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1804.08978}, EPRINT = {1804.08978}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the $O(n^2)$ dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to $O(n^2/\log^2 n)$ in several ways and using a variety of ingenious tricks. This line of research, also known as "the art of shaving log factors", lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time $O(n^2/\log^3 n)$? Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on Boolean formulas (Formula-SAT) faster than exhaustive search. They show that an $O(n^2/\log^{1000} n)$ algorithm for LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear. In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way for shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Fr\'echet distance problem from Computational Geometry, we show that an $O(n^2/\log^{7+\varepsilon} n)$ runtime would imply new Formula-SAT algorithms. Our main result is a reduction from SAT on formulas of size $s$ over $n$ variables to LCS on sequences of length $N=2^{n/2} \cdot s^{1+o(1)}$. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS with $N=2^{n/2} \cdot s^c$, for some $c \geq 100$.}, }
Endnote
%0 Report %A Abboud, Amir %A Bringmann, Karl %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Tighter Connections Between Formula-SAT and Shaving Logs : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3DF7-5 %U http://arxiv.org/abs/1804.08978 %D 2018 %X A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the $O(n^2)$ dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to $O(n^2/\log^2 n)$ in several ways and using a variety of ingenious tricks. This line of research, also known as "the art of shaving log factors", lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time $O(n^2/\log^3 n)$? Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on Boolean formulas (Formula-SAT) faster than exhaustive search. They show that an $O(n^2/\log^{1000} n)$ algorithm for LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear. In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way for shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Fr\'echet distance problem from Computational Geometry, we show that an $O(n^2/\log^{7+\varepsilon} n)$ runtime would imply new Formula-SAT algorithms. Our main result is a reduction from SAT on formulas of size $s$ over $n$ variables to LCS on sequences of length $N=2^{n/2} \cdot s^{1+o(1)}$. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS with $N=2^{n/2} \cdot s^c$, for some $c \geq 100$. %K Computer Science, Computational Complexity, cs.CC,Computer Science, Data Structures and Algorithms, cs.DS
[16]
A. Abboud, K. Bringmann, D. Hermelin, and D. Shabtay, “SETH-Based Lower Bounds for Subset Sum and Bicriteria Path,” 2018. [Online]. Available: http://arxiv.org/abs/1704.04546. (arXiv: 1704.04546)
Abstract
Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other. Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial $O^{*}(T)$-time algorithm for Subset-Sum on $n$ numbers and target $T$ cannot be improved to time $T^{1-\varepsilon}\cdot 2^{o(n)}$ for any $\varepsilon>0$, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity. As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of $N$ given instances of Subset-Sum is a YES instance requires time $(N T)^{1-o(1)}$. As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria s,t-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with $m$ edges and edge lengths bounded by $L$, we show that the $O(Lm)$ pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to $\tilde{O}(L+m)$, in contrast to a recent improvement for Subset Sum (Bringmann, SODA 2017).
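For context on the abstract above: Bellman's pseudo-polynomial algorithm it mentions is the textbook O(n·T) dynamic program for Subset-Sum. The sketch below (an editor's illustration, not code from the paper) shows that baseline, whose improvement to time T^{1-ε}·2^{o(n)} the result rules out under SETH.
```python
def subset_sum(numbers, target):
    """Bellman-style pseudo-polynomial DP: O(n * target) time, O(target) space.
    reachable[t] is True iff some subset of the numbers seen so far sums to t."""
    reachable = [False] * (target + 1)
    reachable[0] = True                        # the empty subset sums to 0
    for x in numbers:
        for t in range(target, x - 1, -1):     # downwards: each number used at most once
            if reachable[t - x]:
                reachable[t] = True
    return reachable[target]

# Example: a subset of {3, 34, 4, 12, 5, 2} summing to 9 exists (4 + 5).
print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True
```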
Export
BibTeX
@online{Abboud_arXiv1704.04546, TITLE = {{SETH}-Based Lower Bounds for Subset Sum and Bicriteria Path}, AUTHOR = {Abboud, Amir and Bringmann, Karl and Hermelin, Danny and Shabtay, Dvir}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1704.04546}, EPRINT = {1704.04546}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other. Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial $O^{*}(T)$-time algorithm for Subset-Sum on $n$ numbers and target $T$ cannot be improved to time $T^{1-\varepsilon}\cdot 2^{o(n)}$ for any $\varepsilon>0$, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity. As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of $N$ given instances of Subset-Sum is a YES instance requires time $(N T)^{1-o(1)}$. As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria s,t-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with $m$ edges and edge lengths bounded by $L$, we show that the $O(Lm)$ pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to $\tilde{O}(L+m)$, in contrast to a recent improvement for Subset Sum (Bringmann, SODA 2017).}, }
Endnote
%0 Report %A Abboud, Amir %A Bringmann, Karl %A Hermelin, Danny %A Shabtay, Dvir %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T SETH-Based Lower Bounds for Subset Sum and Bicriteria Path : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9E17-3 %U http://arxiv.org/abs/1704.04546 %D 2018 %X Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other. Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial $O^{*}(T)$-time algorithm for Subset-Sum on $n$ numbers and target $T$ cannot be improved to time $T^{1-\varepsilon}\cdot 2^{o(n)}$ for any $\varepsilon>0$, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity. As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of $N$ given instances of Subset-Sum is a YES instance requires time $(N T)^{1-o(1)}$. As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria s,t-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with $m$ edges and edge lengths bounded by $L$, we show that the $O(Lm)$ pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to $\tilde{O}(L+m)$, in contrast to a recent improvement for Subset Sum (Bringmann, SODA 2017). %K Computer Science, Data Structures and Algorithms, cs.DS,Computer Science, Computational Complexity, cs.CC
[17]
M. Abrahamsen, A. Adamaszek, K. Bringmann, V. Cohen-Addad, M. Mehr, E. Rotenberg, A. Roytman, and M. Thorup, “Fast Fencing,” 2018. [Online]. Available: http://arxiv.org/abs/1804.00101. (arXiv: 1804.00101)
Abstract
We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set $S$ of $n$ points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose $n$ unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most $k$ closed curves and pay no cost per curve. For the variant with at most $k$ closed curves, we present an algorithm that is polynomial in both $n$ and $k$. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most $k$ curves in $n^{O(k)}$ time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with $k$ curves is NP-hard for general $k$. Our polynomial time algorithm refutes this unless P equals NP.
Export
BibTeX
@online{Abrahamsen_arXiv1804.00101, TITLE = {Fast Fencing}, AUTHOR = {Abrahamsen, Mikkel and Adamaszek, Anna and Bringmann, Karl and Cohen-Addad, Vincent and Mehr, Mehran and Rotenberg, Eva and Roytman, Alan and Thorup, Mikkel}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1804.00101}, EPRINT = {1804.00101}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set $S$ of $n$ points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose $n$ unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most $k$ closed curves and pay no cost per curve. For the variant with at most $k$ closed curves, we present an algorithm that is polynomial in both $n$ and $k$. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most $k$ curves in $n^{O(k)}$ time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with $k$ curves is NP-hard for general $k$. Our polynomial time algorithm refutes this unless P equals NP.}, }
Endnote
%0 Report %A Abrahamsen, Mikkel %A Adamaszek, Anna %A Bringmann, Karl %A Cohen-Addad, Vincent %A Mehr, Mehran %A Rotenberg, Eva %A Roytman, Alan %A Thorup, Mikkel %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations %T Fast Fencing : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3DFE-E %U http://arxiv.org/abs/1804.00101 %D 2018 %X We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set $S$ of $n$ points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose $n$ unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most $k$ closed curves and pay no cost per curve. For the variant with at most $k$ closed curves, we present an algorithm that is polynomial in both $n$ and $k$. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most $k$ curves in $n^{O(k)}$ time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with $k$ curves is NP-hard for general $k$. Our polynomial time algorithm refutes this unless P equals NP. %K Computer Science, Computational Geometry, cs.CG
[18]
M. Abrahamsen, A. Adamaszek, K. Bringmann, V. Cohen-Addad, M. Mehr, E. Rotenberg, A. Roytman, and M. Thorup, “Fast Fencing,” in STOC’18, 50th Annual ACM SIGACT Symposium on Theory of Computing, Los Angeles, CA, USA, 2018.
Export
BibTeX
@inproceedings{Abrahamsen_STOC2018, TITLE = {Fast Fencing}, AUTHOR = {Abrahamsen, Mikkel and Adamaszek, Anna and Bringmann, Karl and Cohen-Addad, Vincent and Mehr, Mehran and Rotenberg, Eva and Roytman, Alan and Thorup, Mikkel}, LANGUAGE = {eng}, ISBN = {978-1-4503-5559-9}, DOI = {10.1145/3188745.3188878}, PUBLISHER = {ACM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {STOC'18, 50th Annual ACM SIGACT Symposium on Theory of Computing}, PAGES = {564--573}, ADDRESS = {Los Angeles, CA, USA}, }
Endnote
%0 Conference Proceedings %A Abrahamsen, Mikkel %A Adamaszek, Anna %A Bringmann, Karl %A Cohen-Addad, Vincent %A Mehr, Mehran %A Rotenberg, Eva %A Roytman, Alan %A Thorup, Mikkel %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations %T Fast Fencing : %G eng %U http://hdl.handle.net/21.11116/0000-0002-171F-3 %R 10.1145/3188745.3188878 %D 2018 %B 50th Annual ACM SIGACT Symposium on Theory of Computing %Z date of event: 2018-06-25 - 2018-06-29 %C Los Angeles, CA, USA %B STOC'18 %P 564 - 573 %I ACM %@ 978-1-4503-5559-9
[19]
A. Adamaszek, P. Chalermsook, A. Ene, and A. Wiese, “Submodular Unsplittable Flow on Trees,” Mathematical Programming / B, vol. 172, no. 1–2, 2018.
Export
BibTeX
@article{Adamaszek2018, TITLE = {Submodular Unsplittable Flow on Trees}, AUTHOR = {Adamaszek, Anna and Chalermsook, Parinya and Ene, Alina and Wiese, Andreas}, LANGUAGE = {eng}, ISSN = {0025-5610}, DOI = {10.1007/s10107-017-1218-4}, PUBLISHER = {Springer}, ADDRESS = {Berlin}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Mathematical Programming / B}, VOLUME = {172}, NUMBER = {1-2}, PAGES = {565--589}, }
Endnote
%0 Journal Article %A Adamaszek, Anna %A Chalermsook, Parinya %A Ene, Alina %A Wiese, Andreas %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Submodular Unsplittable Flow on Trees : %G eng %U http://hdl.handle.net/21.11116/0000-0000-73B6-1 %R 10.1007/s10107-017-1218-4 %7 2018-01-17 %D 2018 %J Mathematical Programming / B %V 172 %N 1-2 %& 565 %P 565 - 589 %I Springer %C Berlin %@ false
[20]
A. Adamaszek, A. Antoniadis, A. Kumar, and T. Mömke, “Approximating Airports and Railways,” in 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018), Caen, France, 2018.
Export
BibTeX
@inproceedings{Adamaszek_STACS2018, TITLE = {Approximating Airports and Railways}, AUTHOR = {Adamaszek, Anna and Antoniadis, Antonios and Kumar, Amit and M{\"o}mke, Tobias}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-062-0}, URL = {urn:nbn:de:0030-drops-85183}, DOI = {10.4230/LIPIcs.STACS.2018.5}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)}, EDITOR = {Niedermeier, Rolf and Vall{\'e}e, Brigitte}, PAGES = {1--13}, EID = {5}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {96}, ADDRESS = {Caen, France}, }
Endnote
%0 Conference Proceedings %A Adamaszek, Anna %A Antoniadis, Antonios %A Kumar, Amit %A Mömke, Tobias %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Approximating Airports and Railways : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9F43-0 %R 10.4230/LIPIcs.STACS.2018.5 %U urn:nbn:de:0030-drops-85183 %D 2018 %B 35th Symposium on Theoretical Aspects of Computer Science %Z date of event: 2018-02-28 - 2018-03-03 %C Caen, France %B 35th Symposium on Theoretical Aspects of Computer Science %E Niedermeier, Rolf; Vallée, Brigitte %P 1 - 13 %Z sequence number: 5 %I Schloss Dagstuhl %@ 978-3-95977-062-0 %B Leibniz International Proceedings in Informatics %N 96 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/8518/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[21]
S. A. Amiri, K.-T. Foerster, and S. Schmid, “Walking Through Waypoints,” in LATIN 2018: Theoretical Informatics, Buenos Aires, Argentina, 2018.
Export
BibTeX
@inproceedings{Amiri_LATIN2018, TITLE = {Walking Through Waypoints}, AUTHOR = {Amiri, Saeed Akhoondian and Foerster, Klaus-Tycho and Schmid, Stefan}, LANGUAGE = {eng}, ISBN = {978-3-319-77403-9}, DOI = {10.1007/978-3-319-77404-6_4}, PUBLISHER = {Springer}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {LATIN 2018: Theoretical Informatics}, EDITOR = {Bender, Michael A. and Farach-Colton, Mart{\'i}n and Mosteiro, Miguel A.}, PAGES = {37--51}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {10807}, ADDRESS = {Buenos Aires, Argentina}, }
Endnote
%0 Conference Proceedings %A Amiri, Saeed Akhoondian %A Foerster, Klaus-Tycho %A Schmid, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Walking Through Waypoints : %G eng %U http://hdl.handle.net/21.11116/0000-0002-5765-B %R 10.1007/978-3-319-77404-6_4 %D 2018 %B 13th Latin American Theoretical Informatics Symposium %Z date of event: 2018-04-16 - 2018-04-19 %C Buenos Aires, Argentina %B LATIN 2018: Theoretical Informatics %E Bender, Michael A.; Farach-Colton, Martín; Mosteiro, Miguel A. %P 37 - 51 %I Springer %@ 978-3-319-77403-9 %B Lecture Notes in Computer Science %N 10807
[22]
S. A. Amiri, S. Dudycz, S. Schmid, and S. Wiederrecht, “Congestion-Free Rerouting of Flows on DAGs,” in 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), Prague, Czech Republic, 2018.
Export
BibTeX
@inproceedings{Amiri_ICALP2018, TITLE = {Congestion-Free Rerouting of Flows on {DAGs}}, AUTHOR = {Amiri, Saeed Akhoondian and Dudycz, Szymon and Schmid, Stefan and Wiederrecht, Sebastian}, LANGUAGE = {eng}, ISBN = {978-3-95977-076-7}, URL = {urn:nbn:de:0030-drops-91471}, DOI = {10.4230/LIPIcs.ICALP.2018.143}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)}, EDITOR = {Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D{\'a}niel and Sannella, Donald}, PAGES = {1--13}, EID = {143}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {107}, ADDRESS = {Prague, Czech Republic}, }
Endnote
%0 Conference Proceedings %A Amiri, Saeed Akhoondian %A Dudycz, Szymon %A Schmid, Stefan %A Wiederrecht, Sebastian %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Congestion-Free Rerouting of Flows on DAGs : %G eng %U http://hdl.handle.net/21.11116/0000-0002-707F-2 %R 10.4230/LIPIcs.ICALP.2018.143 %U urn:nbn:de:0030-drops-91471 %D 2018 %B 45th International Colloquium on Automata, Languages, and Programming %Z date of event: 2018-07-09 - 2018-07-13 %C Prague, Czech Republic %B 45th International Colloquium on Automata, Languages, and Programming %E Chatzigiannakis, Ioannis; Kaklamanis, Christos; Marx, Dániel; Sannella, Donald %P 1 - 13 %Z sequence number: 143 %I Schloss Dagstuhl %@ 978-3-95977-076-7 %B Leibniz International Proceedings in Informatics %N 107 %U http://drops.dagstuhl.de/opus/volltexte/2018/9147/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[23]
S. A. Amiri, K.-T. Foerster, R. Jacob, and S. Schmid, “Charting the Algorithmic Complexity of Waypoint Routing,” ACM SIGCOMM Computer Communication Review, vol. 48, no. 1, 2018.
Export
BibTeX
@article{Amiri_CCR2018, TITLE = {Charting the Algorithmic Complexity of Waypoint Routing}, AUTHOR = {Amiri, Saeed Akhoondian and Foerster, Klaus-Tycho and Jacob, Riko and Schmid, Stefan}, LANGUAGE = {eng}, ISSN = {0146-4833}, DOI = {10.1145/3211852.3211859}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, JOURNAL = {ACM SIGCOMM Computer Communication Review}, VOLUME = {48}, NUMBER = {1}, PAGES = {42--48}, }
Endnote
%0 Journal Article %A Amiri, Saeed Akhoondian %A Foerster, Klaus-Tycho %A Jacob, Riko %A Schmid, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Charting the Algorithmic Complexity of Waypoint Routing : %G eng %U http://hdl.handle.net/21.11116/0000-0002-7083-B %R 10.1145/3211852.3211859 %7 2018 %D 2018 %J ACM SIGCOMM Computer Communication Review %V 48 %N 1 %& 42 %P 42 - 48 %I ACM %C New York, NY %@ false
[24]
S. A. Amiri, P. Ossona de Mendez, R. Rabinovich, and S. Siebertz, “Distributed Domination on Graph Classes of Bounded Expansion,” in SPAA’18, 30th ACM Symposium on Parallelism in Algorithms and Architectures, Vienna, Austria, 2018.
Export
BibTeX
@inproceedings{Amiri_SPAA2018, TITLE = {Distributed Domination on Graph Classes of Bounded Expansion}, AUTHOR = {Amiri, Saeed Akhoondian and Ossona de Mendez, Patrice and Rabinovich, Roman and Siebertz, Sebastian}, LANGUAGE = {eng}, ISBN = {978-1-4503-5799-9}, DOI = {10.1145/3210377.3210383}, PUBLISHER = {ACM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {SPAA'18, 30th ACM Symposium on Parallelism in Algorithms and Architectures}, PAGES = {143--151}, ADDRESS = {Vienna, Austria}, }
Endnote
%0 Conference Proceedings %A Amiri, Saeed Akhoondian %A Ossona de Mendez, Patrice %A Rabinovich, Roman %A Siebertz, Sebastian %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Distributed Domination on Graph Classes of Bounded Expansion : %G eng %U http://hdl.handle.net/21.11116/0000-0002-7081-D %R 10.1145/3210377.3210383 %D 2018 %B 30th ACM Symposium on Parallelism in Algorithms and Architectures %Z date of event: 2018-07-16 - 2018-07-18 %C Vienna, Austria %B SPAA'18 %P 143 - 151 %I ACM %@ 978-1-4503-5799-9
[25]
A. Antoniadis, C. Fischer, and A. Tonnis, “A Collection of Lower Bounds for Online Matching on the Line,” in LATIN 2018: Theoretical Informatics, Buenos Aires, Argentina, 2018.
Export
BibTeX
@inproceedings{AntoniadisLATIN2018, TITLE = {A Collection of Lower Bounds for Online Matching on the Line}, AUTHOR = {Antoniadis, Antonios and Fischer, Carsten and Tonnis, Andreas}, LANGUAGE = {eng}, ISBN = {978-3-319-77403-9}, DOI = {10.1007/978-3-319-77404-6_5}, PUBLISHER = {Springer}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {LATIN 2018: Theoretical Informatics}, EDITOR = {Bender, Michael A. and Farach-Colton, Mart{\'i}n and Mosteiro, Miguel A.}, PAGES = {52--65}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {10807}, ADDRESS = {Buenos Aires, Argentina}, }
Endnote
%0 Conference Proceedings %A Antoniadis, Antonios %A Fischer, Carsten %A Tonnis, Andreas %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Collection of Lower Bounds for Online Matching on the Line : %G eng %U http://hdl.handle.net/21.11116/0000-0002-5763-D %R 10.1007/978-3-319-77404-6_5 %D 2018 %B 13th Latin American Theoretical Informatics Symposium %Z date of event: 2018-04-16 - 2018-04-19 %C Buenos Aires, Argentina %B LATIN 2018: Theoretical Informatics %E Bender, Michael A.; Farach-Colton, Martín; Mosteiro, Miguel A. %P 52 - 65 %I Springer %@ 978-3-319-77403-9 %B Lecture Notes in Computer Science %N 10807
[26]
A. Antoniadis, K. Fleszar, R. Hoeksma, and K. Schewior, “A PTAS for Euclidean TSP with Hyperplane Neighborhoods,” 2018. [Online]. Available: http://arxiv.org/abs/1804.03953. (arXiv: 1804.03953)
Abstract
In the Traveling Salesperson Problem with Neighborhoods (TSPN), we are given a collection of geometric regions in some space. The goal is to output a tour of minimum length that visits at least one point in each region. Even in the Euclidean plane, TSPN is known to be APX-hard, which gives rise to studying more tractable special cases of the problem. In this paper, we focus on the fundamental special case of regions that are hyperplanes in the $d$-dimensional Euclidean space. This case contrasts the much-better understood case of so-called fat regions. While for $d=2$ an exact algorithm with running time $O(n^5)$ is known, settling the exact approximability of the problem for $d=3$ has been repeatedly posed as an open question. To date, only an approximation algorithm with guarantee exponential in $d$ is known, and NP-hardness remains open. For arbitrary fixed $d$, we develop a Polynomial Time Approximation Scheme (PTAS) that works for both the tour and path version of the problem. Our algorithm is based on approximating the convex hull of the optimal tour by a convex polytope of bounded complexity. Such polytopes are represented as solutions of a sophisticated LP formulation, which we combine with the enumeration of crucial properties of the tour. As the approximation guarantee approaches $1$, our scheme adjusts the complexity of the considered polytopes accordingly. In the analysis of our approximation scheme, we show that our search space includes a sufficiently good approximation of the optimum. To do so, we develop a novel and general sparsification technique to transform an arbitrary convex polytope into one with a constant number of vertices and, in turn, into one of bounded complexity in the above sense. Hereby, we maintain important properties of the polytope.
Export
BibTeX
@online{Antoniadis_arXiv1804.03953, TITLE = {A {PTAS} for {E}uclidean {TSP} with Hyperplane Neighborhoods}, AUTHOR = {Antoniadis, Antonios and Fleszar, Krzysztof and Hoeksma, Ruben and Schewior, Kevin}, URL = {http://arxiv.org/abs/1804.03953}, EPRINT = {1804.03953}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {In the Traveling Salesperson Problem with Neighborhoods (TSPN), we are given a collection of geometric regions in some space. The goal is to output a tour of minimum length that visits at least one point in each region. Even in the Euclidean plane, TSPN is known to be APX-hard, which gives rise to studying more tractable special cases of the problem. In this paper, we focus on the fundamental special case of regions that are hyperplanes in the $d$-dimensional Euclidean space. This case contrasts the much-better understood case of so-called fat regions. While for $d=2$ an exact algorithm with running time $O(n^5)$ is known, settling the exact approximability of the problem for $d=3$ has been repeatedly posed as an open question. To date, only an approximation algorithm with guarantee exponential in $d$ is known, and NP-hardness remains open. For arbitrary fixed $d$, we develop a Polynomial Time Approximation Scheme (PTAS) that works for both the tour and path version of the problem. Our algorithm is based on approximating the convex hull of the optimal tour by a convex polytope of bounded complexity. Such polytopes are represented as solutions of a sophisticated LP formulation, which we combine with the enumeration of crucial properties of the tour. As the approximation guarantee approaches $1$, our scheme adjusts the complexity of the considered polytopes accordingly. In the analysis of our approximation scheme, we show that our search space includes a sufficiently good approximation of the optimum. To do so, we develop a novel and general sparsification technique to transform an arbitrary convex polytope into one with a constant number of vertices and, in turn, into one of bounded complexity in the above sense. Hereby, we maintain important properties of the polytope.}, }
Endnote
%0 Report %A Antoniadis, Antonios %A Fleszar, Krzysztof %A Hoeksma, Ruben %A Schewior, Kevin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T A PTAS for Euclidean TSP with Hyperplane Neighborhoods : %U http://hdl.handle.net/21.11116/0000-0002-9F37-E %U http://arxiv.org/abs/1804.03953 %D 2018 %X In the Traveling Salesperson Problem with Neighborhoods (TSPN), we are given a collection of geometric regions in some space. The goal is to output a tour of minimum length that visits at least one point in each region. Even in the Euclidean plane, TSPN is known to be APX-hard, which gives rise to studying more tractable special cases of the problem. In this paper, we focus on the fundamental special case of regions that are hyperplanes in the $d$-dimensional Euclidean space. This case contrasts the much-better understood case of so-called fat regions. While for $d=2$ an exact algorithm with running time $O(n^5)$ is known, settling the exact approximability of the problem for $d=3$ has been repeatedly posed as an open question. To date, only an approximation algorithm with guarantee exponential in $d$ is known, and NP-hardness remains open. For arbitrary fixed $d$, we develop a Polynomial Time Approximation Scheme (PTAS) that works for both the tour and path version of the problem. Our algorithm is based on approximating the convex hull of the optimal tour by a convex polytope of bounded complexity. Such polytopes are represented as solutions of a sophisticated LP formulation, which we combine with the enumeration of crucial properties of the tour. As the approximation guarantee approaches $1$, our scheme adjusts the complexity of the considered polytopes accordingly. In the analysis of our approximation scheme, we show that our search space includes a sufficiently good approximation of the optimum. To do so, we develop a novel and general sparsification technique to transform an arbitrary convex polytope into one with a constant number of vertices and, in turn, into one of bounded complexity in the above sense. Hereby, we maintain important properties of the polytope. %K Computer Science, Data Structures and Algorithms, cs.DS
[27]
A. Antoniadis and K. Schewior, “A Tight Lower Bound for Online Convex Optimization with Switching Costs,” in Approximation and Online Algorithms (WAOA 2017), Vienna, Austria, 2018.
Export
BibTeX
@inproceedings{Antoniadis_WAOA2017, TITLE = {A Tight Lower Bound for Online Convex Optimization with Switching Costs}, AUTHOR = {Antoniadis, Antonios and Schewior, Kevin}, LANGUAGE = {eng}, ISBN = {978-3-319-89440-9}, DOI = {10.1007/978-3-319-89441-6_13}, PUBLISHER = {Springer}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {Approximation and Online Algorithms (WAOA 2017)}, EDITOR = {Solis-Oba, Roberto and Fleischer, Rudolf}, PAGES = {164--165}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {10787}, ADDRESS = {Vienna, Austria}, }
Endnote
%0 Conference Proceedings %A Antoniadis, Antonios %A Schewior, Kevin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T A Tight Lower Bound for Online Convex Optimization with Switching Costs : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9F30-5 %R 10.1007/978-3-319-89441-6_13 %D 2018 %B 15th Workshop on Approximation and Online Algorithms %Z date of event: 2017-09-07 - 2017-09-08 %C Vienna, Austria %B Approximation and Online Algorithms %E Solis-Oba, Roberto; Fleischer, Rudolf %P 164 - 165 %I Springer %@ 978-3-319-89440-9 %B Lecture Notes in Computer Science %N 10787
[28]
A. Antoniadis and A. Cristi, “Near Optimal Mechanism for Energy Aware Scheduling,” in Algorithmic Game Theory (SAGT 2018), Beijing, China, 2018.
Export
BibTeX
@inproceedings{Antoniadis_SAGT2017, TITLE = {Near Optimal Mechanism for Energy Aware Scheduling}, AUTHOR = {Antoniadis, Antonios and Cristi, Andr{\'e}s}, LANGUAGE = {eng}, ISBN = {978-3-319-99659-2}, DOI = {10.1007/978-3-319-99660-8_4}, PUBLISHER = {Springer}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {Algorithmic Game Theory (SAGT 2018)}, EDITOR = {Deng, Xiaotie}, PAGES = {31--42}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {11059}, ADDRESS = {Beijing, China}, }
Endnote
%0 Conference Proceedings %A Antoniadis, Antonios %A Cristi, Andrés %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Near Optimal Mechanism for Energy Aware Scheduling : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9F48-B %R 10.1007/978-3-319-99660-8_4 %D 2018 %B 11th International Symposium on Algorithmic Game Theory %Z date of event: 2018-09-11 - 2018-09-14 %C Beijing, China %B Algorithmic Game Theory %E Deng, Xiaotie %P 31 - 42 %I Springer %@ 978-3-319-99659-2 %B Lecture Notes in Computer Science %N 11059
[29]
S. Arunachalam, S. Chakraborty, M. Koucký, N. Saurabh, and R. de Wolf, “Improved Bounds on Fourier Entropy and Min-entropy,” 2018. [Online]. Available: http://arxiv.org/abs/1809.09819. (arXiv: 1809.09819)
Abstract
Given a Boolean function $f:\{-1,1\}^n\to \{-1,1\}$, the Fourier distribution assigns probability $\widehat{f}(S)^2$ to $S\subseteq [n]$. The Fourier Entropy-Influence (FEI) conjecture of Friedgut and Kalai asks if there exist a universal constant C>0 such that $H(\hat{f}^2)\leq C Inf(f)$, where $H(\hat{f}^2)$ is the Shannon entropy of the Fourier distribution of $f$ and $Inf(f)$ is the total influence of $f$. 1) We consider the weaker Fourier Min-entropy-Influence (FMEI) conjecture. This asks if $H_{\infty}(\hat{f}^2)\leq C Inf(f)$, where $H_{\infty}(\hat{f}^2)$ is the min-entropy of the Fourier distribution. We show $H_{\infty}(\hat{f}^2)\leq 2C_{\min}^\oplus(f)$, where $C_{\min}^\oplus(f)$ is the minimum parity certificate complexity of $f$. We also show that for every $\epsilon\geq 0$, we have $H_{\infty}(\hat{f}^2)\leq 2\log (\|\hat{f}\|_{1,\epsilon}/(1-\epsilon))$, where $\|\hat{f}\|_{1,\epsilon}$ is the approximate spectral norm of $f$. As a corollary, we verify the FMEI conjecture for the class of read-$k$ $DNF$s (for constant $k$). 2) We show that $H(\hat{f}^2)\leq 2 aUC^\oplus(f)$, where $aUC^\oplus(f)$ is the average unambiguous parity certificate complexity of $f$. This improves upon Chakraborty et al. An important consequence of the FEI conjecture is the long-standing Mansour's conjecture. We show that a weaker version of FEI already implies Mansour's conjecture: is $H(\hat{f}^2)\leq C \min\{C^0(f),C^1(f)\}$?, where $C^0(f), C^1(f)$ are the 0- and 1-certificate complexities of $f$, respectively. 3) We study what FEI implies about the structure of polynomials that 1/3-approximate a Boolean function. We pose a conjecture (which is implied by FEI): no "flat" degree-$d$ polynomial of sparsity $2^{\omega(d)}$ can 1/3-approximate a Boolean function. We prove this conjecture unconditionally for a particular class of polynomials.
Export
BibTeX
@online{Arunachalam_arXiv1809.09819, TITLE = {Improved bounds on {F}ourier entropy and Min-entropy}, AUTHOR = {Arunachalam, Srinivasan and Chakraborty, Sourav and Kouck{\'y}, Michal and Saurabh, Nitin and de Wolf, Ronald}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1809.09819}, EPRINT = {1809.09819}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Given a Boolean function $f:\{-1,1\}^n\to \{-1,1\}$, the Fourier distribution assigns probability $\widehat{f}(S)^2$ to $S\subseteq [n]$. The Fourier Entropy-Influence (FEI) conjecture of Friedgut and Kalai asks if there exist a universal constant C>0 such that $H(\hat{f}^2)\leq C Inf(f)$, where $H(\hat{f}^2)$ is the Shannon entropy of the Fourier distribution of $f$ and $Inf(f)$ is the total influence of $f$. 1) We consider the weaker Fourier Min-entropy-Influence (FMEI) conjecture. This asks if $H_{\infty}(\hat{f}^2)\leq C Inf(f)$, where $H_{\infty}(\hat{f}^2)$ is the min-entropy of the Fourier distribution. We show $H_{\infty}(\hat{f}^2)\leq 2C_{\min}^\oplus(f)$, where $C_{\min}^\oplus(f)$ is the minimum parity certificate complexity of $f$. We also show that for every $\epsilon\geq 0$, we have $H_{\infty}(\hat{f}^2)\leq 2\log (\|\hat{f}\|_{1,\epsilon}/(1-\epsilon))$, where $\|\hat{f}\|_{1,\epsilon}$ is the approximate spectral norm of $f$. As a corollary, we verify the FMEI conjecture for the class of read-$k$ $DNF$s (for constant $k$). 2) We show that $H(\hat{f}^2)\leq 2 aUC^\oplus(f)$, where $aUC^\oplus(f)$ is the average unambiguous parity certificate complexity of $f$. This improves upon Chakraborty et al. An important consequence of the FEI conjecture is the long-standing Mansour's conjecture. We show that a weaker version of FEI already implies Mansour's conjecture: is $H(\hat{f}^2)\leq C \min\{C^0(f),C^1(f)\}$?, where $C^0(f), C^1(f)$ are the 0- and 1-certificate complexities of $f$, respectively. 3) We study what FEI implies about the structure of polynomials that 1/3-approximate a Boolean function. We pose a conjecture (which is implied by FEI): no "flat" degree-$d$ polynomial of sparsity $2^{\omega(d)}$ can 1/3-approximate a Boolean function. We prove this conjecture unconditionally for a particular class of polynomials.}, }
Endnote
%0 Report %A Arunachalam, Srinivasan %A Chakraborty, Sourav %A Koucký, Michal %A Saurabh, Nitin %A de Wolf, Ronald %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Improved Bounds on Fourier Entropy and Min-entropy : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AA5A-A %U http://arxiv.org/abs/1809.09819 %D 2018 %X Given a Boolean function $f:\{-1,1\}^n\to \{-1,1\}$, the Fourier distribution assigns probability $\widehat{f}(S)^2$ to $S\subseteq [n]$. The Fourier Entropy-Influence (FEI) conjecture of Friedgut and Kalai asks if there exist a universal constant C>0 such that $H(\hat{f}^2)\leq C Inf(f)$, where $H(\hat{f}^2)$ is the Shannon entropy of the Fourier distribution of $f$ and $Inf(f)$ is the total influence of $f$. 1) We consider the weaker Fourier Min-entropy-Influence (FMEI) conjecture. This asks if $H_{\infty}(\hat{f}^2)\leq C Inf(f)$, where $H_{\infty}(\hat{f}^2)$ is the min-entropy of the Fourier distribution. We show $H_{\infty}(\hat{f}^2)\leq 2C_{\min}^\oplus(f)$, where $C_{\min}^\oplus(f)$ is the minimum parity certificate complexity of $f$. We also show that for every $\epsilon\geq 0$, we have $H_{\infty}(\hat{f}^2)\leq 2\log (\|\hat{f}\|_{1,\epsilon}/(1-\epsilon))$, where $\|\hat{f}\|_{1,\epsilon}$ is the approximate spectral norm of $f$. As a corollary, we verify the FMEI conjecture for the class of read-$k$ $DNF$s (for constant $k$). 2) We show that $H(\hat{f}^2)\leq 2 aUC^\oplus(f)$, where $aUC^\oplus(f)$ is the average unambiguous parity certificate complexity of $f$. This improves upon Chakraborty et al. An important consequence of the FEI conjecture is the long-standing Mansour's conjecture. We show that a weaker version of FEI already implies Mansour's conjecture: is $H(\hat{f}^2)\leq C \min\{C^0(f),C^1(f)\}$?, where $C^0(f), C^1(f)$ are the 0- and 1-certificate complexities of $f$, respectively. 3) We study what FEI implies about the structure of polynomials that 1/3-approximate a Boolean function. We pose a conjecture (which is implied by FEI): no "flat" degree-$d$ polynomial of sparsity $2^{\omega(d)}$ can 1/3-approximate a Boolean function. We prove this conjecture unconditionally for a particular class of polynomials. %K Computer Science, Computational Complexity, cs.CC
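For readers skimming the abstract of the preprint above, the quantities involved are, in the standard notation (restated here for convenience; by Parseval, $\sum_{S}\hat{f}(S)^2=1$ for Boolean $f$, so $\hat{f}^2$ is indeed a probability distribution):
$$
H\bigl(\hat{f}^2\bigr)=\sum_{S\subseteq[n]}\hat{f}(S)^2\log_2\frac{1}{\hat{f}(S)^2},\qquad
H_{\infty}\bigl(\hat{f}^2\bigr)=\min_{S:\,\hat{f}(S)\neq 0}\log_2\frac{1}{\hat{f}(S)^2},\qquad
\mathrm{Inf}(f)=\sum_{S\subseteq[n]}|S|\,\hat{f}(S)^2,
$$
so the FEI conjecture asks for $H(\hat{f}^2)\le C\cdot \mathrm{Inf}(f)$, and FMEI asks for the weaker bound $H_{\infty}(\hat{f}^2)\le C\cdot \mathrm{Inf}(f)$.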
[30]
J. Baldus and K. Bringmann, “A Fast Implementation of Near Neighbors Queries for Fréchet Distance (GIS Cup),” 2018. [Online]. Available: http://arxiv.org/abs/1803.00806. (arXiv: 1803.00806)
Abstract
This paper describes an implementation of fast near-neighbours queries (also known as range searching) with respect to the Fr\'echet distance. The algorithm is designed to be efficient on practical data such as GPS trajectories. Our approach is to use a quadtree data structure to enumerate all curves in the database that have similar start and endpoints as the query curve. On these curves we run positive and negative filters to narrow the set of potential results. Only for those trajectories where these heuristics fail, we compute the Fr\'echet distance exactly, by running a novel recursive variant of the classic free-space diagram algorithm. Our implementation won the ACM SIGSPATIAL GIS Cup 2017.
Export
BibTeX
@online{Baldus_arXiv1803.00806, TITLE = {A Fast Implementation of Near Neighbors Queries for {F}r\'{e}chet Distance ({GIS Cup})}, AUTHOR = {Baldus, Julian and Bringmann, Karl}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1803.00806}, EPRINT = {1803.00806}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {This paper describes an implementation of fast near-neighbours queries (also known as range searching) with respect to the Fr\'echet distance. The algorithm is designed to be efficient on practical data such as GPS trajectories. Our approach is to use a quadtree data structure to enumerate all curves in the database that have similar start and endpoints as the query curve. On these curves we run positive and negative filters to narrow the set of potential results. Only for those trajectories where these heuristics fail, we compute the Fr\'echet distance exactly, by running a novel recursive variant of the classic free-space diagram algorithm. Our implementation won the ACM SIGSPATIAL GIS Cup 2017.}, }
Endnote
%0 Report %A Baldus, Julian %A Bringmann, Karl %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Fast Implementation of Near Neighbors Queries for Fréchet Distance (GIS Cup) : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3E1A-E %U http://arxiv.org/abs/1803.00806 %D 2018 %X This paper describes an implementation of fast near-neighbours queries (also known as range searching) with respect to the Fr\'echet distance. The algorithm is designed to be efficient on practical data such as GPS trajectories. Our approach is to use a quadtree data structure to enumerate all curves in the database that have similar start and endpoints as the query curve. On these curves we run positive and negative filters to narrow the set of potential results. Only for those trajectories where these heuristics fail, we compute the Fr\'echet distance exactly, by running a novel recursive variant of the classic free-space diagram algorithm. Our implementation won the ACM SIGSPATIAL GIS Cup 2017. %K Computer Science, Computational Geometry, cs.CG
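The implementation described in the entry above filters candidate curves and only falls back to an exact continuous Fréchet computation via a free-space-diagram recursion. As a much simpler, self-contained illustration of the distance notion itself, the following Python sketch computes the classic discrete Fréchet distance via the textbook Eiter-Mannila dynamic program; it is not the authors' code and does not implement their continuous decision procedure or their filters.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def discrete_frechet(P, Q):
    """Textbook O(|P|*|Q|) dynamic program for the *discrete* Frechet distance
    between polygonal curves P and Q, given as lists of coordinate tuples.
    Illustration only: the paper's implementation answers near-neighbour
    queries for the continuous Frechet distance via free-space diagrams."""
    n, m = len(P), len(Q)
    INF = float("inf")
    ca = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), d)
    return ca[n - 1][m - 1]

if __name__ == "__main__":
    P = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
    Q = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
    print(discrete_frechet(P, Q))  # 1.0 for these two parallel polylines
```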
[31]
G. Ballard, C. Ikenmeyer, J. M. Landsberg, and N. Ryder, “The Geometry of Rank Decompositions of Matrix Multiplication II: 3 x 3 matrices,” 2018. [Online]. Available: http://arxiv.org/abs/1801.00843. (arXiv: 1801.00843)
Abstract
This is the second in a series of papers on rank decompositions of the matrix multiplication tensor. We present new rank $23$ decompositions for the $3\times 3$ matrix multiplication tensor $M_{\langle 3\rangle}$. All our decompositions have symmetry groups that include the standard cyclic permutation of factors but otherwise exhibit a range of behavior. One of them has 11 cubes as summands and admits an unexpected symmetry group of order 12. We establish basic information regarding symmetry groups of decompositions and outline two approaches for finding new rank decompositions of $M_{\langle n\rangle}$ for larger $n$.
Export
BibTeX
@online{Ballard_arXiv1801.00843, TITLE = {The Geometry of Rank Decompositions of Matrix Multiplication II: $3\times 3$ matrices}, AUTHOR = {Ballard, Grey and Ikenmeyer, Christian and Landsberg, J. M. and Ryder, Nick}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1801.00843}, EPRINT = {1801.00843}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {This is the second in a series of papers on rank decompositions of the matrix multiplication tensor. We present new rank $23$ decompositions for the $3\times 3$ matrix multiplication tensor $M_{\langle 3\rangle}$. All our decompositions have symmetry groups that include the standard cyclic permutation of factors but otherwise exhibit a range of behavior. One of them has 11 cubes as summands and admits an unexpected symmetry group of order 12. We establish basic information regarding symmetry groups of decompositions and outline two approaches for finding new rank decompositions of $M_{\langle n\rangle}$ for larger $n$.}, }
Endnote
%0 Report %A Ballard, Grey %A Ikenmeyer, Christian %A Landsberg, J. M. %A Ryder, Nick %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T The Geometry of Rank Decompositions of Matrix Multiplication II: 3 x 3 matrices : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3F64-9 %U http://arxiv.org/abs/1801.00843 %D 2018 %X This is the second in a series of papers on rank decompositions of the matrix multiplication tensor. We present new rank $23$ decompositions for the $3\times 3$ matrix multiplication tensor $M_{\langle 3\rangle}$. All our decompositions have symmetry groups that include the standard cyclic permutation of factors but otherwise exhibit a range of behavior. One of them has 11 cubes as summands and admits an unexpected symmetry group of order 12. We establish basic information regarding symmetry groups of decompositions and outline two approaches for finding new rank decompositions of $M_{\langle n\rangle}$ for larger $n$. %K Computer Science, Computational Complexity, cs.CC,
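For context on the entry above: a rank decomposition of the matrix multiplication tensor is, in the usual sense (restated here, not quoted from the paper), an expression
$$
M_{\langle n\rangle}=\sum_{i=1}^{r}A_i\otimes B_i\otimes C_i,\qquad A_i,B_i,C_i\in\mathbb{C}^{n\times n},
$$
where $M_{\langle n\rangle}\in(\mathbb{C}^{n\times n})^{\otimes 3}$ encodes the bilinear map $(X,Y)\mapsto XY$, and the smallest possible $r$ is the tensor rank $R(M_{\langle n\rangle})$, which governs the arithmetic complexity of $n\times n$ matrix multiplication. The trivial decomposition has $r=n^3$ (so $r=27$ for $n=3$), Strassen's algorithm corresponds to $r=7$ for $n=2$, and Laderman's classical decomposition, like the new ones constructed in the paper, achieves $r=23$ for $n=3$.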
[32]
G. Ballard, C. Ikenmeyer, J. M. Landsberg, and N. Ryder, “The Geometry of Rank Decompositions of Matrix Multiplication II: 3 x 3 matrices,” Journal of Pure and Applied Algebra, 2018.
Export
BibTeX
@article{Ballard2018, TITLE = {The geometry of rank decompositions of matrix multiplication II: $3\times 3$ matrices}, AUTHOR = {Ballard, Grey and Ikenmeyer, Christian and Landsberg, J. M. and Ryder, Nick}, LANGUAGE = {eng}, ISSN = {0022-4049}, DOI = {10.1016/j.jpaa.2018.10.014}, PUBLISHER = {North-Holland}, ADDRESS = {Amsterdam}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, JOURNAL = {Journal of Pure and Applied Algebra}, }
Endnote
%0 Journal Article %A Ballard, Grey %A Ikenmeyer, Christian %A Landsberg, J. M. %A Ryder, Nick %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T The Geometry of Rank Decompositions of Matrix Multiplication II: 3 x 3 matrices : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AB17-4 %R 10.1016/j.jpaa.2018.10.014 %7 2018 %D 2018 %J Journal of Pure and Applied Algebra %O J. Pure Appl. Algebra %I North-Holland %C Amsterdam %@ false
[33]
F. Ban, V. Bhattiprolu, K. Bringmann, P. Kolev, E. Lee, and D. P. Woodruff, “A PTAS for l_p-Low Rank Approximation,” 2018. [Online]. Available: http://arxiv.org/abs/1807.06101. (arXiv: 1807.06101)
Abstract
A number of recent works have studied algorithms for entrywise $\ell_p$-low rank approximation, namely, algorithms which given an $n \times d$ matrix $A$ (with $n \geq d$), output a rank-$k$ matrix $B$ minimizing $\|A-B\|_p^p=\sum_{i,j}|A_{i,j}-B_{i,j}|^p$ when $p > 0$; and $\|A-B\|_0=\sum_{i,j}[A_{i,j}\neq B_{i,j}]$ for $p=0$. On the algorithmic side, for $p \in (0,2)$, we give the first $(1+\epsilon)$-approximation algorithm running in time $n^{\text{poly}(k/\epsilon)}$. Further, for $p = 0$, we give the first almost-linear time approximation scheme for what we call the Generalized Binary $\ell_0$-Rank-$k$ problem. Our algorithm computes $(1+\epsilon)$-approximation in time $(1/\epsilon)^{2^{O(k)}/\epsilon^{2}} \cdot nd^{1+o(1)}$. On the hardness of approximation side, for $p \in (1,2)$, assuming the Small Set Expansion Hypothesis and the Exponential Time Hypothesis (ETH), we show that there exists $\delta := \delta(\alpha) > 0$ such that the entrywise $\ell_p$-Rank-$k$ problem has no $\alpha$-approximation algorithm running in time $2^{k^{\delta}}$.
Export
BibTeX
@online{Ban_arXiv1807.06101, TITLE = {A {PTAS} for $\ell_p$-Low Rank Approximation}, AUTHOR = {Ban, Frank and Bhattiprolu, Vijay and Bringmann, Karl and Kolev, Pavel and Lee, Euiwoong and Woodruff, David P.}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1807.06101}, EPRINT = {1807.06101}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {A number of recent works have studied algorithms for entrywise $\ell_p$-low rank approximation, namely, algorithms which given an $n \times d$ matrix $A$ (with $n \geq d$), output a rank-$k$ matrix $B$ minimizing $\|A-B\|_p^p=\sum_{i,j}|A_{i,j}-B_{i,j}|^p$ when $p > 0$; and $\|A-B\|_0=\sum_{i,j}[A_{i,j}\neq B_{i,j}]$ for $p=0$. On the algorithmic side, for $p \in (0,2)$, we give the first $(1+\epsilon)$-approximation algorithm running in time $n^{\text{poly}(k/\epsilon)}$. Further, for $p = 0$, we give the first almost-linear time approximation scheme for what we call the Generalized Binary $\ell_0$-Rank-$k$ problem. Our algorithm computes $(1+\epsilon)$-approximation in time $(1/\epsilon)^{2^{O(k)}/\epsilon^{2}} \cdot nd^{1+o(1)}$. On the hardness of approximation side, for $p \in (1,2)$, assuming the Small Set Expansion Hypothesis and the Exponential Time Hypothesis (ETH), we show that there exists $\delta := \delta(\alpha) > 0$ such that the entrywise $\ell_p$-Rank-$k$ problem has no $\alpha$-approximation algorithm running in time $2^{k^{\delta}}$.}, }
Endnote
%0 Report %A Ban, Frank %A Bhattiprolu, Vijay %A Bringmann, Karl %A Kolev, Pavel %A Lee, Euiwoong %A Woodruff, David P. %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A PTAS for l p-Low Rank Approximation : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9D17-4 %U http://arxiv.org/abs/1807.06101 %D 2018 %X A number of recent works have studied algorithms for entrywise $\ell_p$-low rank approximation, namely, algorithms which given an $n \times d$ matrix $A$ (with $n \geq d$), output a rank-$k$ matrix $B$ minimizing $\|A-B\|_p^p=\sum_{i,j}|A_{i,j}-B_{i,j}|^p$ when $p > 0$; and $\|A-B\|_0=\sum_{i,j}[A_{i,j}\neq B_{i,j}]$ for $p=0$. On the algorithmic side, for $p \in (0,2)$, we give the first $(1+\epsilon)$-approximation algorithm running in time $n^{\text{poly}(k/\epsilon)}$. Further, for $p = 0$, we give the first almost-linear time approximation scheme for what we call the Generalized Binary $\ell_0$-Rank-$k$ problem. Our algorithm computes $(1+\epsilon)$-approximation in time $(1/\epsilon)^{2^{O(k)}/\epsilon^{2}} \cdot nd^{1+o(1)}$. On the hardness of approximation side, for $p \in (1,2)$, assuming the Small Set Expansion Hypothesis and the Exponential Time Hypothesis (ETH), we show that there exists $\delta := \delta(\alpha) > 0$ such that the entrywise $\ell_p$-Rank-$k$ problem has no $\alpha$-approximation algorithm running in time $2^{k^{\delta}}$. %K Computer Science, Data Structures and Algorithms, cs.DS,Computer Science, Computational Complexity, cs.CC,Computer Science, Learning, cs.LG
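The objective in the preprint above is purely entrywise, which makes it easy to evaluate even though optimizing it is hard for $p\neq 2$. The numpy sketch below only evaluates that objective for a candidate rank-$k$ matrix, using the truncated SVD as a convenient candidate (optimal for $p=2$ by Eckart-Young, generally not optimal for other $p$); it does not implement the paper's approximation scheme.

```python
import numpy as np

def entrywise_lp_error(A, B, p):
    """||A - B||_p^p = sum_{i,j} |A_ij - B_ij|^p for p > 0,
    and the number of disagreeing entries for p = 0."""
    D = A - B
    if p == 0:
        return np.count_nonzero(D)
    return np.sum(np.abs(D) ** p)

def truncated_svd(A, k):
    """A convenient rank-k candidate: the truncated SVD. It minimizes the
    entrywise error for p = 2 (Eckart-Young) but is generally *not* optimal
    for other p; the paper above gives approximation schemes for those cases."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    B = truncated_svd(A, k=5)
    for p in (1.0, 1.5, 2.0):
        print(p, entrywise_lp_error(A, B, p))
```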
[34]
L. Becchetti, A. Clementi, P. Manurangsi, E. Natale, F. Pasquale, P. Raghavendra, and L. Trevisan, “Average Whenever You Meet: Opportunistic Protocols for Community Detection,” in 26th Annual European Symposium on Algorithms (ESA 2018), Helsinki, Finland, 2018.
Export
BibTeX
@inproceedings{Becchetti_ESA2018, TITLE = {Average Whenever You Meet: {O}pportunistic Protocols for Community Detection}, AUTHOR = {Becchetti, Luca and Clementi, Andrea and Manurangsi, Pasin and Natale, Emanuele and Pasquale, Francesco and Raghavendra, Prasad and Trevisan, Luca}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-081-1}, URL = {urn:nbn:de:0030-drops-94705}, DOI = {10.4230/LIPIcs.ESA.2018.7}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {26th Annual European Symposium on Algorithms (ESA 2018)}, EDITOR = {Azar, Yossi and Bast, Hannah and Herman, Grzegorz}, PAGES = {1--13}, EID = {7}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {112}, ADDRESS = {Helsinki, Finland}, }
Endnote
%0 Conference Proceedings %A Becchetti, Luca %A Clementi, Andrea %A Manurangsi, Pasin %A Natale, Emanuele %A Pasquale, Francesco %A Raghavendra, Prasad %A Trevisan, Luca %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Average Whenever You Meet: Opportunistic Protocols for Community Detection : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A957-E %R 10.4230/LIPIcs.ESA.2018.7 %U urn:nbn:de:0030-drops-94705 %D 2018 %B 26th Annual European Symposium on Algorithms %Z date of event: 2018-08-20 - 2018-08-22 %C Helsinki, Finland %B 26th Annual European Symposium on Algorithms %E Azar, Yossi; Bast, Hannah; Herman, Grzegorz %P 1 - 13 %Z sequence number: 7 %I Schloss Dagstuhl %@ 978-3-95977-081-1 %B Leibniz International Proceedings in Informatics %N 112 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/9470/
[35]
L. Becchetti, V. Bonifaci, and E. Natale, “Pooling or Sampling: Collective Dynamics for Electrical Flow Estimation,” in AAMAS’18, 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden, 2018.
Export
BibTeX
@inproceedings{Becchetti_AAMAS2018, TITLE = {Pooling or Sampling: {C}ollective Dynamics for Electrical Flow Estimation}, AUTHOR = {Becchetti, Luca and Bonifaci, Vincenzo and Natale, Emanuele}, LANGUAGE = {eng}, ISBN = {978-1-4503-5649-7}, PUBLISHER = {ACM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {AAMAS'18, 17th International Conference on Autonomous Agents and MultiAgent Systems}, PAGES = {1576--1584}, ADDRESS = {Stockholm, Sweden}, }
Endnote
%0 Conference Proceedings %A Becchetti, Luca %A Bonifaci, Vincenzo %A Natale, Emanuele %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Pooling or Sampling: Collective Dynamics for Electrical Flow Estimation : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A953-2 %D 2018 %B 17th International Conference on Autonomous Agents and MultiAgent Systems %Z date of event: 2018-07-10 - 2018-07-15 %C Stockholm, Sweden %B AAMAS'18 %P 1576 - 1584 %I ACM %@ 978-1-4503-5649-7
[36]
R. Becker, M. Sagraloff, V. Sharma, and C. Yap, “A Simple Near-Optimal Subdivision Algorithm for Complex Root Isolation based on the Pellet Test and Newton Iteration,” Journal of Symbolic Computation, vol. 86, 2018.
Export
BibTeX
@article{Becker2017JSC, TITLE = {A Simple Near-Optimal Subdivision Algorithm for Complex Root Isolation based on the {Pellet} Test and {Newton} Iteration}, AUTHOR = {Becker, Ruben and Sagraloff, Michael and Sharma, Vikram and Yap, Chee}, LANGUAGE = {eng}, ISSN = {0747-7171}, DOI = {10.1016/j.jsc.2017.03.009}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Journal of Symbolic Computation}, VOLUME = {86}, PAGES = {51--96}, }
Endnote
%0 Journal Article %A Becker, Ruben %A Sagraloff, Michael %A Sharma, Vikram %A Yap, Chee %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Simple Near-Optimal Subdivision Algorithm for Complex Root Isolation based on the Pellet Test and Newton Iteration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5717-8 %R 10.1016/j.jsc.2017.03.009 %7 2017-03-29 %D 2018 %J Journal of Symbolic Computation %V 86 %& 51 %P 51 - 96 %I Elsevier %C Amsterdam %@ false
[37]
R. Becker, V. Bonifaci, A. Karrenbauer, P. Kolev, and K. Mehlhorn, “Two Results on Slime Mold Computations,” Theoretical Computer Science, 2018.
Abstract
In this paper, we present two results on slime mold computations. The first one treats a biologically-grounded model, originally proposed by biologists analyzing the behavior of the slime mold Physarum polycephalum. This primitive organism was empirically shown by Nakagaki et al. to solve shortest path problems in wet-lab experiments (Nature'00). We show that the proposed simple mathematical model actually generalizes to a much wider class of problems, namely undirected linear programs with a non-negative cost vector. For our second result, we consider the discretization of a biologically-inspired model. This model is a directed variant of the biologically-grounded one and was never claimed to describe the behavior of a biological system. Straszak and Vishnoi showed that it can $\epsilon$-approximately solve flow problems (SODA'16) and even general linear programs with positive cost vector (ITCS'16) within a finite number of steps. We give a refined convergence analysis that improves the dependence on $\epsilon$ from polynomial to logarithmic and simultaneously allows to choose a step size that is independent of $\epsilon$. Furthermore, we show that the dynamics can be initialized with a more general set of (infeasible) starting points.
Export
BibTeX
@article{BBKKM2018, TITLE = {Two Results on Slime Mold Computations}, AUTHOR = {Becker, Ruben and Bonifaci, Vincenzo and Karrenbauer, Andreas and Kolev, Pavel and Mehlhorn, Kurt}, LANGUAGE = {eng}, ISSN = {0304-3975}, DOI = {10.1016/j.tcs.2018.08.027}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {In this paper, we present two results on slime mold computations. The first one treats a biologically-grounded model, originally proposed by biologists analyzing the behavior of the slime mold Physarum polycephalum. This primitive organism was empirically shown by Nakagaki et al. to solve shortest path problems in wet-lab experiments (Nature'00). We show that the proposed simple mathematical model actually generalizes to a much wider class of problems, namely undirected linear programs with a non-negative cost vector. For our second result, we consider the discretization of a biologically-inspired model. This model is a directed variant of the biologically-grounded one and was never claimed to describe the behavior of a biological system. Straszak and Vishnoi showed that it can $\epsilon$-approximately solve flow problems (SODA'16) and even general linear programs with positive cost vector (ITCS'16) within a finite number of steps. We give a refined convergence analysis that improves the dependence on $\epsilon$ from polynomial to logarithmic and simultaneously allows to choose a step size that is independent of $\epsilon$. Furthermore, we show that the dynamics can be initialized with a more general set of (infeasible) starting points.}, JOURNAL = {Theoretical Computer Science}, }
Endnote
%0 Journal Article %A Becker, Ruben %A Bonifaci, Vincenzo %A Karrenbauer, Andreas %A Kolev, Pavel %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Two Results on Slime Mold Computations : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A3AE-2 %R 10.1016/j.tcs.2018.08.027 %7 2018 %D 2018 %X In this paper, we present two results on slime mold computations. The first one treats a biologically-grounded model, originally proposed by biologists analyzing the behavior of the slime mold Physarum polycephalum. This primitive organism was empirically shown by Nakagaki et al. to solve shortest path problems in wet-lab experiments (Nature'00). We show that the proposed simple mathematical model actually generalizes to a much wider class of problems, namely undirected linear programs with a non-negative cost vector. For our second result, we consider the discretization of a biologically-inspired model. This model is a directed variant of the biologically-grounded one and was never claimed to describe the behavior of a biological system. Straszak and Vishnoi showed that it can $\epsilon$-approximately solve flow problems (SODA'16) and even general linear programs with positive cost vector (ITCS'16) within a finite number of steps. We give a refined convergence analysis that improves the dependence on $\epsilon$ from polynomial to logarithmic and simultaneously allows to choose a step size that is independent of $\epsilon$. Furthermore, we show that the dynamics can be initialized with a more general set of (infeasible) starting points. %K Computer Science, Data Structures and Algorithms, cs.DS,Mathematics, Dynamical Systems, math.DS,Mathematics, Optimization and Control, math.OC, Physics, Biological Physics, physics.bio-ph %J Theoretical Computer Science %I Elsevier %C Amsterdam %@ false
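For orientation, the biologically-grounded Physarum model referred to in the abstract above is usually stated as follows (this is the commonly used formulation of the dynamics, added here as background rather than quoted from the article): on a graph with positive edge costs $c_e$, each edge carries a conductivity $x_e(t)>0$, $q(t)$ denotes the electrical flow of value $1$ between source and sink when edge $e$ has resistance $c_e/x_e(t)$, and the conductivities evolve according to
$$
\dot{x}_e(t)=|q_e(t)|-x_e(t)\qquad\text{for every edge }e,
$$
so that edges carrying much flow are reinforced while unused edges decay; in the shortest-path setting these dynamics are known to converge to a shortest source-sink path.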
[38]
A. Bhattacharya, D. Issac, R. Jaiswal, and A. Kumar, “Sampling in Space Restricted Settings,” Algorithmica, vol. 80, no. 5, 2018.
Export
BibTeX
@article{Bhattacharya2018, TITLE = {Sampling in Space Restricted Settings}, AUTHOR = {Bhattacharya, Anup and Issac, Davis and Jaiswal, Ragesh and Kumar, Amit}, LANGUAGE = {eng}, ISSN = {0178-4617}, DOI = {10.1007/s00453-017-0335-z}, PUBLISHER = {Springer-Verlag}, ADDRESS = {New York}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Algorithmica}, VOLUME = {80}, NUMBER = {5}, PAGES = {1439--1458}, }
Endnote
%0 Journal Article %A Bhattacharya, Anup %A Issac, Davis %A Jaiswal, Ragesh %A Kumar, Amit %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Sampling in Space Restricted Settings : %G eng %U http://hdl.handle.net/21.11116/0000-0001-2C37-1 %R 10.1007/s00453-017-0335-z %7 2017 %D 2018 %J Algorithmica %V 80 %N 5 %& 1439 %P 1439 - 1458 %I Springer-Verlag %C New York %@ false
[39]
M. Bläser, C. Ikenmeyer, G. Jindal, and V. Lysikov, “Generalized Matrix Completion and Algebraic Natural Proofs,” in STOC’18, 50th Annual ACM SIGACT Symposium on Theory of Computing, Los Angeles, CA, USA, 2018.
Export
BibTeX
@inproceedings{Blaeser_STOC2018, TITLE = {Generalized Matrix Completion and Algebraic Natural Proofs}, AUTHOR = {Bl{\"a}ser, Markus and Ikenmeyer, Christian and Jindal, Gorav and Lysikov, Vladimir}, LANGUAGE = {eng}, ISBN = {978-1-4503-5559-9}, DOI = {10.1145/3188745.3188832}, PUBLISHER = {ACM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {STOC'18, 50th Annual ACM SIGACT Symposium on Theory of Computing}, PAGES = {1193--1206}, ADDRESS = {Los Angeles, CA, USA}, }
Endnote
%0 Conference Proceedings %A Bläser, Markus %A Ikenmeyer, Christian %A Jindal, Gorav %A Lysikov, Vladimir %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Generalized Matrix Completion and Algebraic Natural Proofs : %G eng %U http://hdl.handle.net/21.11116/0000-0002-17DF-A %R 10.1145/3188745.3188832 %D 2018 %B 50th Annual ACM SIGACT Symposium on Theory of Computing %Z date of event: 2018-06-25 - 2018-06-29 %C Los Angeles, CA, USA %B STOC'18 %P 1193 - 1206 %I ACM %@ 978-1-4503-5559-9
[40]
M. Bläser, C. Ikenmeyer, G. Jindal, and V. Lysikov, “Generalized Matrix Completion and Algebraic Natural Proofs,” Electronic Colloquium on Computational Complexity (ECCC): Report Series, vol. 18-064, 2018.
Export
BibTeX
@article{BlaeserCCC18_064, TITLE = {Generalized Matrix Completion and Algebraic Natural Proofs}, AUTHOR = {Bl{\"a}ser, Markus and Ikenmeyer, Christian and Jindal, Gorav and Lysikov, Vladimir}, LANGUAGE = {eng}, ISSN = {1433-8092}, PUBLISHER = {Hasso-Plattner-Institut f{\"u}r Softwaretechnik GmbH}, ADDRESS = {Potsdam}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, JOURNAL = {Electronic Colloquium on Computational Complexity (ECCC): Report Series}, VOLUME = {18-064}, PAGES = {1--27}, }
Endnote
%0 Journal Article %A Bläser, Markus %A Ikenmeyer, Christian %A Jindal, Gorav %A Lysikov, Vladimir %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Generalized Matrix Completion and Algebraic Natural Proofs : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3F5F-0 %7 2018 %D 2018 %J Electronic Colloquium on Computational Complexity (ECCC): Report Series %V 18-064 %& 1 %P 1 - 27 %I Hasso-Plattner-Institut für Softwaretechnik GmbH %C Potsdam %@ false %U https://eccc.weizmann.ac.il/report/2018/064/
[41]
L. Boczkowski, E. Natale, O. Feinerman, and A. Korman, “Limits on Reliable Information Flows through Stochastic Populations,” PLoS Computational Biology, vol. 14, no. 6, 2018.
Export
BibTeX
@article{Boczkowski2018, TITLE = {Limits on Reliable Information Flows through Stochastic Populations}, AUTHOR = {Boczkowski, Lucas and Natale, Emanuele and Feinerman, Ofer and Korman, Amos}, LANGUAGE = {eng}, ISSN = {1553-734X}, DOI = {10.1371/journal.pcbi.1006195}, PUBLISHER = {Public Library of Science}, ADDRESS = {San Francisco, CA}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, JOURNAL = {PLoS Computational Biology}, VOLUME = {14}, NUMBER = {6}, EID = {e1006195}, }
Endnote
%0 Journal Article %A Boczkowski, Lucas %A Natale, Emanuele %A Feinerman, Ofer %A Korman, Amos %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Limits on Reliable Information Flows through Stochastic Populations : %G eng %U http://hdl.handle.net/21.11116/0000-0001-999D-2 %R 10.1371/journal.pcbi.1006195 %7 2018 %D 2018 %J PLoS Computational Biology %V 14 %N 6 %Z sequence number: e1006195 %I Public Library of Science %C San Francisco, CA %@ false
[42]
L. Boczkowski, O. Feinerman, A. Korman, and E. Natale, “Limits for Rumor Spreading in Stochastic Populations,” in 9th Innovations in Theoretical Computer Science (ITCS 2018), Cambridge, MA, USA, 2018.
Export
BibTeX
@inproceedings{Boczkowski_ITCS2018, TITLE = {Limits for Rumor Spreading in Stochastic Populations}, AUTHOR = {Boczkowski, Lucas and Feinerman, Ofer and Korman, Amos and Natale, Emanuele}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-060-6}, URL = {urn:nbn:de:0030-drops-83207}, DOI = {10.4230/LIPIcs.ITCS.2018.49}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {9th Innovations in Theoretical Computer Science (ITCS 2018)}, EDITOR = {Karlin, Anna R.}, PAGES = {1--21}, EID = {49}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {94}, ADDRESS = {Cambridge, MA, USA}, }
Endnote
%0 Conference Proceedings %A Boczkowski, Lucas %A Feinerman, Ofer %A Korman, Amos %A Natale, Emanuele %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Limits for Rumor Spreading in Stochastic Populations : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A962-1 %R 10.4230/LIPIcs.ITCS.2018.49 %U urn:nbn:de:0030-drops-83207 %D 2018 %B 9th Innovations in Theoretical Computer Science %Z date of event: 2018-01-11 - 2018-01-14 %C Cambridge, MA, USA %B 9th Innovations in Theoretical Computer Science %E Karlin, Anna R. %P 1 - 21 %Z sequence number: 49 %I Schloss Dagstuhl %@ 978-3-95977-060-6 %B Leibniz International Proceedings in Informatics %N 94 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/8320/
[43]
J.-D. Boissonnat, R. Dyer, and A. Ghosh, “Delaunay Triangulation of Manifolds,” Foundations of Computational Mathematics, vol. 18, no. 2, 2018.
Export
BibTeX
@article{Boissonnat2017, TITLE = {Delaunay Triangulation of Manifolds}, AUTHOR = {Boissonnat, Jean-Daniel and Dyer, Ramsay and Ghosh, Arijit}, LANGUAGE = {eng}, ISSN = {1615-3375}, DOI = {10.1007/s10208-017-9344-1}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Foundations of Computational Mathematics}, VOLUME = {18}, NUMBER = {2}, PAGES = {399--431}, }
Endnote
%0 Journal Article %A Boissonnat, Jean-Daniel %A Dyer, Ramsay %A Ghosh, Arijit %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Delaunay Triangulation of Manifolds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-7945-0 %R 10.1007/s10208-017-9344-1 %7 2017-02-01 %D 2018 %J Foundations of Computational Mathematics %V 18 %N 2 %& 399 %P 399 - 431 %I Springer %C New York, NY %@ false
[44]
K. Bringmann and P. Wellnitz, “Clique-Based Lower Bounds for Parsing Tree-Adjoining Grammars,” 2018. [Online]. Available: http://arxiv.org/abs/1803.00804. (arXiv: 1803.00804)
Abstract
Tree-adjoining grammars are a generalization of context-free grammars that are well suited to model human languages and are thus popular in computational linguistics. In the tree-adjoining grammar recognition problem, given a grammar $\Gamma$ and a string $s$ of length $n$, the task is to decide whether $s$ can be obtained from $\Gamma$. Rajasekaran and Yooseph's parser (JCSS'98) solves this problem in time $O(n^{2\omega})$, where $\omega < 2.373$ is the matrix multiplication exponent. The best algorithms avoiding fast matrix multiplication take time $O(n^6)$. The first evidence for hardness was given by Satta (J. Comp. Linguist.'94): For a more general parsing problem, any algorithm that avoids fast matrix multiplication and is significantly faster than $O(|\Gamma| n^6)$ in the case of $|\Gamma| = \Theta(n^{12})$ would imply a breakthrough for Boolean matrix multiplication. Following an approach by Abboud et al. (FOCS'15) for context-free grammar recognition, in this paper we resolve many of the disadvantages of the previous lower bound. We show that, even on constant-size grammars, any improvement on Rajasekaran and Yooseph's parser would imply a breakthrough for the $k$-Clique problem. This establishes tree-adjoining grammar parsing as a practically relevant problem with the unusual running time of $n^{2\omega}$, up to lower order factors.
Export
BibTeX
@online{Bringmann_arXiv1803.00804, TITLE = {Clique-Based Lower Bounds for Parsing Tree-Adjoining Grammars}, AUTHOR = {Bringmann, Karl and Wellnitz, Philip}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1803.00804}, EPRINT = {1803.00804}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Tree-adjoining grammars are a generalization of context-free grammars that are well suited to model human languages and are thus popular in computational linguistics. In the tree-adjoining grammar recognition problem, given a grammar $\Gamma$ and a string $s$ of length $n$, the task is to decide whether $s$ can be obtained from $\Gamma$. Rajasekaran and Yooseph's parser (JCSS'98) solves this problem in time $O(n^{2\omega})$, where $\omega < 2.373$ is the matrix multiplication exponent. The best algorithms avoiding fast matrix multiplication take time $O(n^6)$. The first evidence for hardness was given by Satta (J. Comp. Linguist.'94): For a more general parsing problem, any algorithm that avoids fast matrix multiplication and is significantly faster than $O(|\Gamma| n^6)$ in the case of $|\Gamma| = \Theta(n^{12})$ would imply a breakthrough for Boolean matrix multiplication. Following an approach by Abboud et al. (FOCS'15) for context-free grammar recognition, in this paper we resolve many of the disadvantages of the previous lower bound. We show that, even on constant-size grammars, any improvement on Rajasekaran and Yooseph's parser would imply a breakthrough for the $k$-Clique problem. This establishes tree-adjoining grammar parsing as a practically relevant problem with the unusual running time of $n^{2\omega}$, up to lower order factors.}, }
Endnote
%0 Report %A Bringmann, Karl %A Wellnitz, Philip %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Clique-Based Lower Bounds for Parsing Tree-Adjoining Grammars : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3E2A-C %U http://arxiv.org/abs/1803.00804 %D 2018 %X Tree-adjoining grammars are a generalization of context-free grammars that are well suited to model human languages and are thus popular in computational linguistics. In the tree-adjoining grammar recognition problem, given a grammar $\Gamma$ and a string $s$ of length $n$, the task is to decide whether $s$ can be obtained from $\Gamma$. Rajasekaran and Yooseph's parser (JCSS'98) solves this problem in time $O(n^{2\omega})$, where $\omega < 2.373$ is the matrix multiplication exponent. The best algorithms avoiding fast matrix multiplication take time $O(n^6)$. The first evidence for hardness was given by Satta (J. Comp. Linguist.'94): For a more general parsing problem, any algorithm that avoids fast matrix multiplication and is significantly faster than $O(|\Gamma| n^6)$ in the case of $|\Gamma| = \Theta(n^{12})$ would imply a breakthrough for Boolean matrix multiplication. Following an approach by Abboud et al. (FOCS'15) for context-free grammar recognition, in this paper we resolve many of the disadvantages of the previous lower bound. We show that, even on constant-size grammars, any improvement on Rajasekaran and Yooseph's parser would imply a breakthrough for the $k$-Clique problem. This establishes tree-adjoining grammar parsing as a practically relevant problem with the unusual running time of $n^{2\omega}$, up to lower order factors. %K Computer Science, Computational Complexity, cs.CC,Computer Science, Data Structures and Algorithms, cs.DS
[45]
K. Bringmann, S. Cabello, and M. T. M. Emmerich, “Maximum Volume Subset Selection for Anchored Boxes,” 2018. [Online]. Available: http://arxiv.org/abs/1803.00849. (arXiv: 1803.00849)
Abstract
Let $B$ be a set of $n$ axis-parallel boxes in $\mathbb{R}^d$ such that each box has a corner at the origin and the other corner in the positive quadrant of $\mathbb{R}^d$, and let $k$ be a positive integer. We study the problem of selecting $k$ boxes in $B$ that maximize the volume of the union of the selected boxes. This research is motivated by applications in skyline queries for databases and in multicriteria optimization, where the problem is known as the hypervolume subset selection problem. It is known that the problem can be solved in polynomial time in the plane, while the best known running time in any dimension $d \ge 3$ is $\Omega\big(\binom{n}{k}\big)$. We show that: - The problem is NP-hard already in 3 dimensions. - In 3 dimensions, we break the bound $\Omega\big(\binom{n}{k}\big)$, by providing an $n^{O(\sqrt{k})}$ algorithm. - For any constant dimension $d$, we present an efficient polynomial-time approximation scheme.
Export
BibTeX
@online{Bringmann_arXiv1803.00849, TITLE = {Maximum Volume Subset Selection for Anchored Boxes}, AUTHOR = {Bringmann, Karl and Cabello, Sergio and Emmerich, Michael T. M.}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1803.00849}, EPRINT = {1803.00849}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Let $B$ be a set of $n$ axis-parallel boxes in $\mathbb{R}^d$ such that each box has a corner at the origin and the other corner in the positive quadrant of $\mathbb{R}^d$, and let $k$ be a positive integer. We study the problem of selecting $k$ boxes in $B$ that maximize the volume of the union of the selected boxes. This research is motivated by applications in skyline queries for databases and in multicriteria optimization, where the problem is known as the hypervolume subset selection problem. It is known that the problem can be solved in polynomial time in the plane, while the best known running time in any dimension $d \ge 3$ is $\Omega\big(\binom{n}{k}\big)$. We show that: -- The problem is NP-hard already in 3 dimensions. -- In 3 dimensions, we break the bound $\Omega\big(\binom{n}{k}\big)$, by providing an $n^{O(\sqrt{k})}$ algorithm. -- For any constant dimension $d$, we present an efficient polynomial-time approximation scheme.}, }
Endnote
%0 Report %A Bringmann, Karl %A Cabello, Sergio %A Emmerich, Michael T. M. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Maximum Volume Subset Selection for Anchored Boxes : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3E08-2 %U http://arxiv.org/abs/1803.00849 %D 2018 %X Let $B$ be a set of $n$ axis-parallel boxes in $\mathbb{R}^d$ such that each box has a corner at the origin and the other corner in the positive quadrant of $\mathbb{R}^d$, and let $k$ be a positive integer. We study the problem of selecting $k$ boxes in $B$ that maximize the volume of the union of the selected boxes. This research is motivated by applications in skyline queries for databases and in multicriteria optimization, where the problem is known as the hypervolume subset selection problem. It is known that the problem can be solved in polynomial time in the plane, while the best known running time in any dimension $d \ge 3$ is $\Omega\big(\binom{n}{k}\big)$. We show that: - The problem is NP-hard already in 3 dimensions. - In 3 dimensions, we break the bound $\Omega\big(\binom{n}{k}\big)$, by providing an $n^{O(\sqrt{k})}$ algorithm. - For any constant dimension $d$, we present an efficient polynomial-time approximation scheme. %K Computer Science, Computational Geometry, cs.CG,Computer Science, Data Structures and Algorithms, cs.DS
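The abstract above concerns maximizing the volume of a union of anchored boxes. A very small brute-force sketch makes that objective concrete: since the intersection of anchored boxes is again an anchored box (its upper corner is the coordinate-wise minimum), the union volume follows from inclusion-exclusion, and the best $k$-subset can be found by exhaustive search for tiny instances. This is exponential-time illustration code, not the $n^{O(\sqrt{k})}$ algorithm or the approximation scheme from the paper.

```python
from itertools import combinations
from math import prod

def union_volume(boxes):
    """Volume of the union of anchored boxes in R^d.
    Each box is given by its upper corner u, i.e. the box is prod_j [0, u[j]].
    Inclusion-exclusion applies because an intersection of anchored boxes is
    again anchored, with upper corner the coordinate-wise minimum.
    Exponential in len(boxes); for illustration only."""
    total = 0.0
    n = len(boxes)
    for r in range(1, n + 1):
        sign = 1.0 if r % 2 == 1 else -1.0
        for S in combinations(boxes, r):
            total += sign * prod(min(u[j] for u in S) for j in range(len(S[0])))
    return total

def best_k_subset(boxes, k):
    """Exhaustive search for the k boxes maximizing the union volume
    (the hypervolume subset selection objective); brute force only."""
    return max(combinations(boxes, k), key=union_volume)

if __name__ == "__main__":
    boxes = [(4.0, 1.0, 1.0), (1.0, 4.0, 1.0), (1.0, 1.0, 4.0), (2.0, 2.0, 2.0)]
    picked = best_k_subset(boxes, k=2)
    print(picked, union_volume(picked))  # two boxes covering volume 10.0
```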
[46]
K. Bringmann, T. Husfeldt, and M. Magnusson, “Multivariate Analysis of Orthogonal Range Searching and Graph Distances Parameterized by Treewidth,” 2018. [Online]. Available: http://arxiv.org/abs/1805.07135. (arXiv: 1805.07135)
Abstract
We show that the eccentricities, diameter, radius, and Wiener index of an undirected $n$-vertex graph with nonnegative edge lengths can be computed in time $O(n\cdot \binom{k+\lceil\log n\rceil}{k} \cdot 2^k k^2 \log n)$, where $k$ is the treewidth of the graph. For every $\epsilon>0$, this bound is $n^{1+\epsilon}\exp O(k)$, which matches a hardness result of Abboud, Vassilevska Williams, and Wang (SODA 2015) and closes an open problem in the multivariate analysis of polynomial-time computation. To this end, we show that the analysis of an algorithm of Cabello and Knauer (Comp. Geom., 2009) in the regime of non-constant treewidth can be improved by revisiting the analysis of orthogonal range searching, improving bounds of the form $\log^d n$ to $\binom{d+\lceil\log n\rceil}{d}$, as originally observed by Monier (J. Alg. 1980). We also investigate the parameterization by vertex cover number.
Export
BibTeX
@online{Bringmann_arXiv1805.07135, TITLE = {Multivariate Analysis of Orthogonal Range Searching and Graph Distances Parameterized by Treewidth}, AUTHOR = {Bringmann, Karl and Husfeldt, Thore and Magnusson, M{\aa}ns}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1805.07135}, EPRINT = {1805.07135}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We show that the eccentricities, diameter, radius, and Wiener index of an undirected $n$-vertex graph with nonnegative edge lengths can be computed in time $O(n\cdot \binom{k+\lceil\log n\rceil}{k} \cdot 2^k k^2 \log n)$, where $k$ is the treewidth of the graph. For every $\epsilon>0$, this bound is $n^{1+\epsilon}\exp O(k)$, which matches a hardness result of Abboud, Vassilevska Williams, and Wang (SODA 2015) and closes an open problem in the multivariate analysis of polynomial-time computation. To this end, we show that the analysis of an algorithm of Cabello and Knauer (Comp. Geom., 2009) in the regime of non-constant treewidth can be improved by revisiting the analysis of orthogonal range searching, improving bounds of the form $\log^d n$ to $\binom{d+\lceil\log n\rceil}{d}$, as originally observed by Monier (J. Alg. 1980). We also investigate the parameterization by vertex cover number.}, }
Endnote
%0 Report %A Bringmann, Karl %A Husfeldt, Thore %A Magnusson, Måns %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Multivariate Analysis of Orthogonal Range Searching and Graph Distances Parameterized by Treewidth : %G eng %U http://hdl.handle.net/21.11116/0000-0002-173B-3 %U http://arxiv.org/abs/1805.07135 %D 2018 %X We show that the eccentricities, diameter, radius, and Wiener index of an undirected $n$-vertex graph with nonnegative edge lengths can be computed in time $O(n\cdot \binom{k+\lceil\log n\rceil}{k} \cdot 2^k k^2 \log n)$, where $k$ is the treewidth of the graph. For every $\epsilon>0$, this bound is $n^{1+\epsilon}\exp O(k)$, which matches a hardness result of Abboud, Vassilevska Williams, and Wang (SODA 2015) and closes an open problem in the multivariate analysis of polynomial-time computation. To this end, we show that the analysis of an algorithm of Cabello and Knauer (Comp. Geom., 2009) in the regime of non-constant treewidth can be improved by revisiting the analysis of orthogonal range searching, improving bounds of the form $\log^d n$ to $\binom{d+\lceil\log n\rceil}{d}$, as originally observed by Monier (J. Alg. 1980). We also investigate the parameterization by vertex cover number. %K Computer Science, Data Structures and Algorithms, cs.DS
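To see why replacing $\log^d n$ by the binomial coefficient matters in the abstract above, one can apply the standard estimate $\binom{a}{b}\le(ea/b)^b$ (a generic bound, not taken from the paper):
$$
\binom{d+\lceil\log n\rceil}{d}\;\le\;\left(\frac{e\,(d+\lceil\log n\rceil)}{d}\right)^{d},
$$
which is still $O(\log^d n)$ for constant $d$, but for $d=\Theta(\log n)$ (the regime arising from treewidth $k\approx\log n$) it is only $2^{O(d)}$, whereas $\log^d n$ would be $n^{\Theta(\log\log n)}$; this is what makes a bound of the form $n^{1+\epsilon}\exp O(k)$ possible.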
[47]
K. Bringmann and M. Künnemann, “Multivariate Fine-Grained Complexity of Longest Common Subsequence,” 2018. [Online]. Available: http://arxiv.org/abs/1803.00938. (arXiv: 1803.00938)
Abstract
We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann, K\"unnemann FOCS'15]. Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known. To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n:=\max\{|x|,|y|\}$, the length of the shorter string $m:=\min\{|x|,|y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m-L$ and $\Delta := n-L$, the alphabet size, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n+\min\{d, \delta \Delta, \delta m\})^{1\pm o(1)}$. [...]
Export
BibTeX
@online{Bringmann_arXiv1803.00938, TITLE = {Multivariate Fine-Grained Complexity of Longest Common Subsequence}, AUTHOR = {Bringmann, Karl and K{\"u}nnemann, Marvin}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1803.00938}, EPRINT = {1803.00938}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann, K\"unnemann FOCS'15]. Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known. To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n:=\max\{|x|,|y|\}$, the length of the shorter string $m:=\min\{|x|,|y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m-L$ and $\Delta := n-L$, the alphabet size, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n+\min\{d, \delta \Delta, \delta m\})^{1\pm o(1)}$. [...]}, }
Endnote
%0 Report %A Bringmann, Karl %A Künnemann, Marvin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Multivariate Fine-Grained Complexity of Longest Common Subsequence : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3E02-8 %U http://arxiv.org/abs/1803.00938 %D 2018 %X We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann, K\"unnemann FOCS'15]. Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known. To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n:=\max\{|x|,|y|\}$, the length of the shorter string $m:=\min\{|x|,|y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m-L$ and $\Delta := n-L$, the alphabet size, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n+\min\{d, \delta \Delta, \delta m\})^{1\pm o(1)}$. [...] %K Computer Science, Computational Complexity, cs.CC,Computer Science, Data Structures and Algorithms, cs.DS
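The abstract above takes the textbook $O(n^2)$ dynamic program as its baseline. For reference, a minimal space-efficient version of that classic DP is sketched below (illustration only; the parameterized algorithms and lower bounds studied in the paper are about when one can do better).

```python
def lcs_length(x: str, y: str) -> int:
    """Classic O(|x|*|y|) dynamic program for the length of a longest common
    subsequence, keeping only one DP row; dp[j] holds the value for the
    prefixes processed so far."""
    m = len(y)
    dp = [0] * (m + 1)
    for a in x:
        prev_diag = 0  # dp value of cell (i-1, j-1)
        for j in range(1, m + 1):
            prev_row = dp[j]  # dp value of cell (i-1, j)
            dp[j] = prev_diag + 1 if a == y[j - 1] else max(dp[j], dp[j - 1])
            prev_diag = prev_row
    return dp[m]

if __name__ == "__main__":
    print(lcs_length("dynamic", "programming"))  # 3, e.g. the subsequence "ami"
```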
[48]
K. Bringmann and M. Künnemann, “Multivariate Fine-Grained Complexity of Longest Common Subsequence,” in SODA’18, Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 2018.
Export
BibTeX
@inproceedings{Bringmann_SODA18, TITLE = {Multivariate Fine-Grained Complexity of Longest Common Subsequence}, AUTHOR = {Bringmann, Karl and K{\"u}nnemann, Marvin}, LANGUAGE = {eng}, ISBN = {978-1-61197-503-1}, DOI = {10.1137/1.9781611975031.79}, PUBLISHER = {SIAM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {SODA'18, Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms}, EDITOR = {Czumaj, Artur}, PAGES = {1216--1235}, ADDRESS = {New Orleans, LA, USA}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Künnemann, Marvin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Multivariate Fine-Grained Complexity of Longest Common Subsequence : %G eng %U http://hdl.handle.net/21.11116/0000-0000-3F0E-C %R 10.1137/1.9781611975031.79 %D 2018 %B Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2018-01-07 - 2018-01-10 %C New Orleans, LA, USA %B SODA'18 %E Czumaj, Artur %P 1216 - 1235 %I SIAM %@ 978-1-61197-503-1
[49]
K. Bringmann, T. Friedrich, and A. Krohmer, “De-anonymization of Heterogeneous Random Graphs in Quasilinear Time,” Algorithmica, vol. 80, no. 11, 2018.
Export
BibTeX
@article{bringmann_deanonymization_2018, TITLE = {De-anonymization of Heterogeneous Random Graphs in Quasilinear Time}, AUTHOR = {Bringmann, Karl and Friedrich, Tobias and Krohmer, Anton}, LANGUAGE = {eng}, ISSN = {0178-4617}, DOI = {10.1007/s00453-017-0395-0}, PUBLISHER = {Springer-Verlag}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Algorithmica}, VOLUME = {80}, NUMBER = {11}, PAGES = {3397--3427}, }
Endnote
%0 Journal Article %A Bringmann, Karl %A Friedrich, Tobias %A Krohmer, Anton %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T De-anonymization of Heterogeneous Random Graphs in Quasilinear Time : %G eng %U http://hdl.handle.net/21.11116/0000-0001-F6A3-1 %R 10.1007/s00453-017-0395-0 %7 2017-11-15 %D 2018 %J Algorithmica %V 80 %N 11 %& 3397 %P 3397 - 3427 %I Springer-Verlag %C New York, NY %@ false
[50]
K. Bringmann, P. Gawrychowski, S. Mozes, and O. Weimann, “Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless APSP can),” in SODA’18, Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 2018.
Export
BibTeX
@inproceedings{Bringmann_SODA18b, TITLE = {Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless {APSP} can)}, AUTHOR = {Bringmann, Karl and Gawrychowski, Pawe{\l} and Mozes, Shay and Weimann, Oren}, LANGUAGE = {eng}, ISBN = {978-1-61197-503-1}, DOI = {10.1137/1.9781611975031.77}, PUBLISHER = {SIAM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {SODA'18, Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms}, EDITOR = {Czumaj, Artur}, PAGES = {1190--1206}, ADDRESS = {New Orleans, LA, USA}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Gawrychowski, Paweł %A Mozes, Shay %A Weimann, Oren %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless APSP can) : %G eng %U http://hdl.handle.net/21.11116/0000-0000-3F13-5 %R 10.1137/1.9781611975031.77 %D 2018 %B Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2018-01-07 - 2018-01-10 %C New Orleans, LA, USA %B SODA'18 %E Czumaj, Artur %P 1190 - 1206 %I SIAM %@ 978-1-61197-503-1
[51]
K. Bringmann and S. Krinninger, “A Note on Hardness of Diameter Approximation,” Information Processing Letters, vol. 133, 2018.
Export
BibTeX
@article{Bringmann2018, TITLE = {A Note on Hardness of Diameter Approximation}, AUTHOR = {Bringmann, Karl and Krinninger, Sebastian}, LANGUAGE = {eng}, ISSN = {0020-0190}, DOI = {10.1016/j.ipl.2017.12.010}, PUBLISHER = {Elsevier}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Information Processing Letters}, VOLUME = {133}, PAGES = {10--15}, }
Endnote
%0 Journal Article %A Bringmann, Karl %A Krinninger, Sebastian %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T A Note on Hardness of Diameter Approximation : %G eng %U http://hdl.handle.net/21.11116/0000-0001-2C44-2 %R 10.1016/j.ipl.2017.12.010 %7 2018 %D 2018 %J Information Processing Letters %V 133 %& 10 %P 10 - 15 %I Elsevier %@ false
[52]
K. Bringmann, C. Ikenmeyer, and J. Zuiddam, “On Algebraic Branching Programs of Small Width,” Journal of the ACM, vol. 65, no. 5, 2018.
Export
BibTeX
@article{Bringmann_JACM2018, TITLE = {On Algebraic Branching Programs of Small Width}, AUTHOR = {Bringmann, Karl and Ikenmeyer, Christian and Zuiddam, Jeroen}, LANGUAGE = {eng}, ISSN = {0004-5411}, DOI = {10.1145/3209663}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Journal of the ACM}, VOLUME = {65}, NUMBER = {5}, PAGES = {1--29}, EID = {32}, }
Endnote
%0 Journal Article %A Bringmann, Karl %A Ikenmeyer, Christian %A Zuiddam, Jeroen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On Algebraic Branching Programs of Small Width : %G eng %U http://hdl.handle.net/21.11116/0000-0002-1B53-3 %R 10.1145/3209663 %7 2018 %D 2018 %J Journal of the ACM %V 65 %N 5 %& 1 %P 1 - 29 %Z sequence number: 32 %I ACM %C New York, NY %@ false
[53]
K. Bringmann, P. Kolev, and D. Woodruff, “Approximation Algorithms for l_0-Low Rank Approximation,” in Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, 2018.
Export
BibTeX
@inproceedings{NIPS2018_7242, TITLE = {Approximation Algorithms for $\ell_0$-Low Rank Approximation}, AUTHOR = {Bringmann, Karl and Kolev, Pavel and Woodruff, David}, LANGUAGE = {eng}, PUBLISHER = {Curran Associates}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Advances in Neural Information Processing Systems 30}, EDITOR = {Guyon, I. and Luxburg, U. V. and Bengio, S. and Wallach, H. and Fergus, R. and Vishwanathan, S. and Garnett, R.}, PAGES = {6648--6659}, ADDRESS = {Long Beach, CA, USA}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Kolev, Pavel %A Woodruff, David %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Approximation Algorithms for l_0-Low Rank Approximation : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9CF9-6 %D 2018 %B Thirty-first Conference on Neural Information Processing Systems %Z date of event: 2017-12-04 - 2017-12-09 %C Long Beach, CA, USA %B Advances in Neural Information Processing Systems 30 %E Guyon, I.; Luxburg, U. V.; Bengio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; Garnett, R. %P 6648 - 6659 %I Curran Associates
[54]
K. Bringmann and B. Ray Chaudhury, “Sketching, Streaming, and Fine-Grained Complexity of (Weighted) LCS,” 2018. [Online]. Available: http://arxiv.org/abs/1810.01238. (arXiv: 1810.01238)
Abstract
We study sketching and streaming algorithms for the Longest Common Subsequence problem (LCS) on strings of small alphabet size $|\Sigma|$. For the problem of deciding whether the LCS of strings $x,y$ has length at least $L$, we obtain a sketch size and streaming space usage of $\mathcal{O}(L^{|\Sigma| - 1} \log L)$. We also prove matching unconditional lower bounds. As an application, we study a variant of LCS where each alphabet symbol is equipped with a weight that is given as input, and the task is to compute a common subsequence of maximum total weight. Using our sketching algorithm, we obtain an $\mathcal{O}(\textrm{min}\{nm, n + m^{{\lvert \Sigma \rvert}}\})$-time algorithm for this problem, on strings $x,y$ of length $n,m$, with $n \ge m$. We prove optimality of this running time up to lower order factors, assuming the Strong Exponential Time Hypothesis.
Export
BibTeX
@online{Bringmann_arXiv1810.01238, TITLE = {Sketching, Streaming, and Fine-Grained Complexity of (Weighted) {LCS}}, AUTHOR = {Bringmann, Karl and Ray Chaudhury, Bhaskar}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1810.01238}, EPRINT = {1810.01238}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We study sketching and streaming algorithms for the Longest Common Subsequence problem (LCS) on strings of small alphabet size $|\Sigma|$. For the problem of deciding whether the LCS of strings $x,y$ has length at least $L$, we obtain a sketch size and streaming space usage of $\mathcal{O}(L^{|\Sigma| - 1} \log L)$. We also prove matching unconditional lower bounds. As an application, we study a variant of LCS where each alphabet symbol is equipped with a weight that is given as input, and the task is to compute a common subsequence of maximum total weight. Using our sketching algorithm, we obtain an $\mathcal{O}(\textrm{min}\{nm, n + m^{{\lvert \Sigma \rvert}}\})$-time algorithm for this problem, on strings $x,y$ of length $n,m$, with $n \ge m$. We prove optimality of this running time up to lower order factors, assuming the Strong Exponential Time Hypothesis.}, }
Endnote
%0 Report %A Bringmann, Karl %A Ray Chaudhury, Bhaskar %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Sketching, Streaming, and Fine-Grained Complexity of (Weighted) LCS : %G eng %U http://hdl.handle.net/21.11116/0000-0002-57B9-C %U http://arxiv.org/abs/1810.01238 %D 2018 %X We study sketching and streaming algorithms for the Longest Common Subsequence problem (LCS) on strings of small alphabet size $|\Sigma|$. For the problem of deciding whether the LCS of strings $x,y$ has length at least $L$, we obtain a sketch size and streaming space usage of $\mathcal{O}(L^{|\Sigma| - 1} \log L)$. We also prove matching unconditional lower bounds. As an application, we study a variant of LCS where each alphabet symbol is equipped with a weight that is given as input, and the task is to compute a common subsequence of maximum total weight. Using our sketching algorithm, we obtain an $\mathcal{O}(\textrm{min}\{nm, n + m^{{\lvert \Sigma \rvert}}\})$-time algorithm for this problem, on strings $x,y$ of length $n,m$, with $n \ge m$. We prove optimality of this running time up to lower order factors, assuming the Strong Exponential Time Hypothesis. %K Computer Science, Data Structures and Algorithms, cs.DS,
[55]
K. Bringmann, T. Husfeldt, and M. Magnusson, “Multivariate Analysis of Orthogonal Range Searching and Graph Distances Parameterized by Treewidth,” in 13th International Symposium on Parameterized and Exact Computation (IPEC 2018), Helsinki, Finland. (Accepted/in press)
Export
BibTeX
@inproceedings{Bringmann_IPEC2018, TITLE = {Multivariate Analysis of Orthogonal Range Searching and Graph Distances Parameterized by Treewidth}, AUTHOR = {Bringmann, Karl and Husfeldt, Thore and Magnusson, M{\aa}ns}, LANGUAGE = {eng}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {13th International Symposium on Parameterized and Exact Computation (IPEC 2018)}, SERIES = {Leibniz International Proceedings in Informatics}, ADDRESS = {Helsinki, Finland}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Husfeldt, Thore %A Magnusson, Måns %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Multivariate Analysis of Orthogonal Range Searching and Graph Distances Parameterized by Treewidth : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9CFE-1 %D 2018 %B 13th International Symposium on Parameterized and Exact Computation %Z date of event: 2018-08-20 - 2018-08-24 %C Helsinki, Finland %B 13th International Symposium on Parameterized and Exact Computation %I Schloss Dagstuhl %B Leibniz International Proceedings in Informatics
[56]
K. Bringmann and B. Ray Chaudhury, “Sketching, Streaming, and Fine-Grained Complexity of (Weighted) LCS,” in 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018), Ahmedabad, India, 2018.
Export
BibTeX
@inproceedings{Bringmann_FSTTCS2018, TITLE = {Sketching, Streaming, and Fine-Grained Complexity of (Weighted) {LCS}}, AUTHOR = {Bringmann, Karl and Ray Chaudhury, Bhaskar}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-093-4}, URL = {urn:nbn:de:0030-drops-99390}, DOI = {10.4230/LIPIcs.FSTTCS.2018.40}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)}, EDITOR = {Ganguly, Sumit and Pandya, Paritosh}, PAGES = {1--16}, EID = {40}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {122}, ADDRESS = {Ahmedabad, India}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Ray Chaudhury, Bhaskar %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Sketching, Streaming, and Fine-Grained Complexity of (Weighted) LCS : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9D0B-2 %R 10.4230/LIPIcs.FSTTCS.2018.40 %U urn:nbn:de:0030-drops-99390 %D 2018 %B 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science %Z date of event: 2018-12-11 - 2018-12-13 %C Ahmedabad, India %B 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science %E Ganguly, Sumit; Pandya, Paritosh %P 1 - 16 %Z sequence number: 40 %I Schloss Dagstuhl %@ 978-3-95977-093-4 %B Leibniz International Proceedings in Informatics %N 122 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/9939/
[57]
K. Bringmann, M. Künnemann, and A. Nusser, “Fréchet Distance Under Translation: Conditional Hardness and an Algorithm via Offline Dynamic Grid Reachability,” 2018. [Online]. Available: http://arxiv.org/abs/1810.10982. (arXiv: 1810.10982)
Abstract
The discrete Fr\'echet distance is a popular measure for comparing polygonal curves. An important variant is the discrete Fr\'echet distance under translation, which enables detection of similar movement patterns in different spatial domains. For polygonal curves of length $n$ in the plane, the fastest known algorithm runs in time $\tilde{\cal O}(n^{5})$ [Ben Avraham, Kaplan, Sharir '15]. This is achieved by constructing an arrangement of disks of size ${\cal O}(n^{4})$, and then traversing its faces while updating reachability in a directed grid graph of size $N := {\cal O}(n^2)$, which can be done in time $\tilde{\cal O}(\sqrt{N})$ per update [Diks, Sankowski '07]. The contribution of this paper is two-fold. First, although it is an open problem to solve dynamic reachability in directed grid graphs faster than $\tilde{\cal O}(\sqrt{N})$, we improve this part of the algorithm: We observe that an offline variant of dynamic $s$-$t$-reachability in directed grid graphs suffices, and we solve this variant in amortized time $\tilde{\cal O}(N^{1/3})$ per update, resulting in an improved running time of $\tilde{\cal O}(n^{4.66...})$ for the discrete Fr\'echet distance under translation. Second, we provide evidence that constructing the arrangement of size ${\cal O}(n^{4})$ is necessary in the worst case, by proving a conditional lower bound of $n^{4 - o(1)}$ on the running time for the discrete Fr\'echet distance under translation, assuming the Strong Exponential Time Hypothesis.
Export
BibTeX
@online{Bringmann_arXiv1810.10982, TITLE = {Fr{\'e}chet Distance Under Translation: Conditional Hardness and an Algorithm via Offline Dynamic Grid Reachability}, AUTHOR = {Bringmann, Karl and K{\"u}nnemann, Marvin and Nusser, Andr{\'e}}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1810.10982}, EPRINT = {1810.10982}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {The discrete Fr\'echet distance is a popular measure for comparing polygonal curves. An important variant is the discrete Fr\'echet distance under translation, which enables detection of similar movement patterns in different spatial domains. For polygonal curves of length $n$ in the plane, the fastest known algorithm runs in time $\tilde{\cal O}(n^{5})$ [Ben Avraham, Kaplan, Sharir '15]. This is achieved by constructing an arrangement of disks of size ${\cal O}(n^{4})$, and then traversing its faces while updating reachability in a directed grid graph of size $N := {\cal O}(n^2)$, which can be done in time $\tilde{\cal O}(\sqrt{N})$ per update [Diks, Sankowski '07]. The contribution of this paper is two-fold. First, although it is an open problem to solve dynamic reachability in directed grid graphs faster than $\tilde{\cal O}(\sqrt{N})$, we improve this part of the algorithm: We observe that an offline variant of dynamic $s$-$t$-reachability in directed grid graphs suffices, and we solve this variant in amortized time $\tilde{\cal O}(N^{1/3})$ per update, resulting in an improved running time of $\tilde{\cal O}(n^{4.66...})$ for the discrete Fr\'echet distance under translation. Second, we provide evidence that constructing the arrangement of size ${\cal O}(n^{4})$ is necessary in the worst case, by proving a conditional lower bound of $n^{4 -- o(1)}$ on the running time for the discrete Fr\'echet distance under translation, assuming the Strong Exponential Time Hypothesis.}, }
Endnote
%0 Report %A Bringmann, Karl %A Künnemann, Marvin %A Nusser, André %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fréchet Distance Under Translation: Conditional Hardness and an Algorithm via Offline Dynamic Grid Reachability : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9E35-1 %U http://arxiv.org/abs/1810.10982 %D 2018 %X The discrete Fr\'echet distance is a popular measure for comparing polygonal curves. An important variant is the discrete Fr\'echet distance under translation, which enables detection of similar movement patterns in different spatial domains. For polygonal curves of length $n$ in the plane, the fastest known algorithm runs in time $\tilde{\cal O}(n^{5})$ [Ben Avraham, Kaplan, Sharir '15]. This is achieved by constructing an arrangement of disks of size ${\cal O}(n^{4})$, and then traversing its faces while updating reachability in a directed grid graph of size $N := {\cal O}(n^2)$, which can be done in time $\tilde{\cal O}(\sqrt{N})$ per update [Diks, Sankowski '07]. The contribution of this paper is two-fold. First, although it is an open problem to solve dynamic reachability in directed grid graphs faster than $\tilde{\cal O}(\sqrt{N})$, we improve this part of the algorithm: We observe that an offline variant of dynamic $s$-$t$-reachability in directed grid graphs suffices, and we solve this variant in amortized time $\tilde{\cal O}(N^{1/3})$ per update, resulting in an improved running time of $\tilde{\cal O}(n^{4.66...})$ for the discrete Fr\'echet distance under translation. Second, we provide evidence that constructing the arrangement of size ${\cal O}(n^{4})$ is necessary in the worst case, by proving a conditional lower bound of $n^{4 - o(1)}$ on the running time for the discrete Fr\'echet distance under translation, assuming the Strong Exponential Time Hypothesis. %K Computer Science, Data Structures and Algorithms, cs.DS
[58]
J. Bund, C. Lenzen, and M. Medina, “Optimal Metastability-containing Sorting Networks,” in Proceedings of the 2018 Design, Automation & Test in Europe (DATE 2018), Dresden, Germany, 2018.
Export
BibTeX
@inproceedings{Bund_DATE2018, TITLE = {Optimal Metastability-containing Sorting Networks}, AUTHOR = {Bund, Johannes and Lenzen, Christoph and Medina, Moti}, LANGUAGE = {eng}, ISBN = {978-3-9819263-1-6}, DOI = {10.23919/DATE.2018.8342063}, PUBLISHER = {IEEE}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {Proceedings of the 2018 Design, Automation \& Test in Europe (DATE 2018)}, PAGES = {521--526}, ADDRESS = {Dresden, Germany}, }
Endnote
%0 Conference Proceedings %A Bund, Johannes %A Lenzen, Christoph %A Medina, Moti %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Optimal Metastability-containing Sorting Networks : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3F69-4 %R 10.23919/DATE.2018.8342063 %D 2018 %B Design, Automation & Test in Europe Conference & Exhibition %Z date of event: 2018-03-19 - 2018-03-23 %C Dresden, Germany %B Proceedings of the 2018 Design, Automation & Test in Europe %P 521 - 526 %I IEEE %@ 978-3-9819263-1-6
[59]
J. Bund, C. Lenzen, and M. Medina, “Optimal Metastability-Containing Sorting Networks,” 2018. [Online]. Available: http://arxiv.org/abs/1801.07549. (arXiv: 1801.07549)
Abstract
When setup/hold times of bistable elements are violated, they may become metastable, i.e., enter a transient state that is neither digital 0 nor 1. In general, metastability cannot be avoided, a problem that manifests whenever taking discrete measurements of analog values. Metastability of the output then reflects uncertainty as to whether a measurement should be rounded up or down to the next possible measurement outcome. Surprisingly, Lenzen and Medina (ASYNC 2016) showed that metastability can be contained, i.e., measurement values can be correctly sorted without resolving metastability first. However, both their work and the state of the art by Bund et al. (DATE 2017) leave open whether such a solution can be as small and fast as standard sorting networks. We show that this is indeed possible, by providing a circuit that sorts Gray code inputs (possibly containing a metastable bit) and has asymptotically optimal depth and size. Concretely, for 10-channel sorting networks and 16-bit wide inputs, we improve by 48.46% in delay and by 71.58% in area over Bund et al. Our simulations indicate that straightforward transistor-level optimization is likely to result in performance on par with standard (non-containing) solutions.
Export
BibTeX
@online{Bund_arXiv1801.07549, TITLE = {Optimal Metastability-Containing Sorting Networks}, AUTHOR = {Bund, Johannes and Lenzen, Christoph and Medina, Moti}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1801.07549}, EPRINT = {1801.07549}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {When setup/hold times of bistable elements are violated, they may become metastable, i.e., enter a transient state that is neither digital 0 nor 1. In general, metastability cannot be avoided, a problem that manifests whenever taking discrete measurements of analog values. Metastability of the output then reflects uncertainty as to whether a measurement should be rounded up or down to the next possible measurement outcome. Surprisingly, Lenzen and Medina (ASYNC 2016) showed that metastability can be contained, i.e., measurement values can be correctly sorted without resolving metastability first. However, both their work and the state of the art by Bund et al. (DATE 2017) leave open whether such a solution can be as small and fast as standard sorting networks. We show that this is indeed possible, by providing a circuit that sorts Gray code inputs (possibly containing a metastable bit) and has asymptotically optimal depth and size. Concretely, for 10-channel sorting networks and 16-bit wide inputs, we improve by 48.46% in delay and by 71.58% in area over Bund et al. Our simulations indicate that straightforward transistor-level optimization is likely to result in performance on par with standard (non-containing) solutions.}, }
Endnote
%0 Report %A Bund, Johannes %A Lenzen, Christoph %A Medina, Moti %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Optimal Metastability-Containing Sorting Networks : %G eng %U http://hdl.handle.net/21.11116/0000-0002-1801-2 %U http://arxiv.org/abs/1801.07549 %D 2018 %X When setup/hold times of bistable elements are violated, they may become metastable, i.e., enter a transient state that is neither digital 0 nor 1. In general, metastability cannot be avoided, a problem that manifests whenever taking discrete measurements of analog values. Metastability of the output then reflects uncertainty as to whether a measurement should be rounded up or down to the next possible measurement outcome. Surprisingly, Lenzen and Medina (ASYNC 2016) showed that metastability can be contained, i.e., measurement values can be correctly sorted without resolving metastability first. However, both their work and the state of the art by Bund et al. (DATE 2017) leave open whether such a solution can be as small and fast as standard sorting networks. We show that this is indeed possible, by providing a circuit that sorts Gray code inputs (possibly containing a metastable bit) and has asymptotically optimal depth and size. Concretely, for 10-channel sorting networks and 16-bit wide inputs, we improve by 48.46% in delay and by 71.58% in area over Bund et al. Our simulations indicate that straightforward transistor-level optimization is likely to result in performance on par with standard (non-containing) solutions. %K Computer Science, Distributed, Parallel, and Cluster Computing, cs.DC
[60]
J. Bund, C. Lenzen, and M. Medina, “Small Hazard-free Transducers,” 2018. [Online]. Available: http://arxiv.org/abs/1811.12369. (arXiv: 1811.12369)
Abstract
Recently, an unconditional exponential separation between the hazard-free complexity and (standard) circuit complexity of explicit functions has been shown. This raises the question: which classes of functions permit efficient hazard-free circuits? Our main result is as follows. A \emph{transducer} is a finite state machine that transcribes, symbol by symbol, an input string of length $n$ into an output string of length $n$. We prove that any function arising from a transducer with $s$ states, whose input symbols are encoded by $\ell$ bits, has a hazard-free circuit of size $2^{\mathcal{O}(s+\ell)}\cdot n$ and depth $\mathcal{O}(\ell+ s\cdot \log n)$; in particular, if $s, \ell\in \mathcal{O}(1)$, size and depth are asymptotically optimal. We utilize our main result to derive efficient circuits for \emph{$k$-recoverable addition}. Informally speaking, a code is \emph{$k$-recoverable} if it does not increase uncertainty regarding the encoded value, so long as it is guaranteed that it is from $\{x,x+1,\ldots,x+k\}$ for some $x\in \mathbb{N}_0$. We provide an asymptotically optimal $k$-recoverable code. We also realize a transducer with $\mathcal{O}(k)$ states that adds two codewords from this $k$-recoverable code. Combined with our main result, we obtain a hazard-free adder circuit of size $2^{\mathcal{O}(k)}n$ and depth $\mathcal{O}(k\log n)$ with respect to this code, i.e., a $k$-recoverable adder circuit that adds two codewords of $n$ bits each. In other words, $k$-recoverable addition is fixed-parameter tractable with respect to $k$.
Export
BibTeX
@online{Bund_arXiv1811.12369, TITLE = {Small Hazard-free Transducers}, AUTHOR = {Bund, Johannes and Lenzen, Christoph and Medina, Moti}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1811.12369}, EPRINT = {1811.12369}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Recently, an unconditional exponential separation between the hazard-free complexity and (standard) circuit complexity of explicit functions has been shown. This raises the question: which classes of functions permit efficient hazard-free circuits? Our main result is as follows. A \emph{transducer} is a finite state machine that transcribes, symbol by symbol, an input string of length $n$ into an output string of length $n$. We prove that any function arising from a transducer with $s$ states, that is input symbols which are encoded by $\ell$ bits, has a hazard-free circuit of size $2^{\BO(s+\ell)}\cdot n$ and depth $\BO(\ell+ s\cdot \log n)$; in particular, if $s, \ell\in \BO(1)$, size and depth are asymptotically optimal. We utilize our main result to derive efficient circuits for \emph{$k$-recoverable addition}. Informally speaking, a code is \emph{$k$-recoverable} if it does not increase uncertainty regarding the encoded value, so long as it is guaranteed that it is from $\{x,x+1,\ldots,x+k\}$ for some $x\in \NN_0$. We provide an asymptotically optimal $k$-recoverable code. We also realize a transducer with $\BO(k)$ states that adds two codewords from this $k$-recoverable code. Combined with our main result, we obtain a hazard-free adder circuit of size $2^{\BO(k)}n$ and depth $\BO(k\log n)$ with respect to this code, i.e., a $k$-recoverable adder circuit that adds two codewords of $n$ bits each. In other words, $k$-recoverable addition is fixed-parameter tractable with respect to $k$.}, }
Endnote
%0 Report %A Bund, Johannes %A Lenzen, Christoph %A Medina, Moti %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Small Hazard-free Transducers : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9FAD-9 %U http://arxiv.org/abs/1811.12369 %D 2018 %X Recently, an unconditional exponential separation between the hazard-free complexity and (standard) circuit complexity of explicit functions has been shown. This raises the question: which classes of functions permit efficient hazard-free circuits? Our main result is as follows. A \emph{transducer} is a finite state machine that transcribes, symbol by symbol, an input string of length $n$ into an output string of length $n$. We prove that any function arising from a transducer with $s$ states, that is input symbols which are encoded by $\ell$ bits, has a hazard-free circuit of size $2^{\BO(s+\ell)}\cdot n$ and depth $\BO(\ell+ s\cdot \log n)$; in particular, if $s, \ell\in \BO(1)$, size and depth are asymptotically optimal. We utilize our main result to derive efficient circuits for \emph{$k$-recoverable addition}. Informally speaking, a code is \emph{$k$-recoverable} if it does not increase uncertainty regarding the encoded value, so long as it is guaranteed that it is from $\{x,x+1,\ldots,x+k\}$ for some $x\in \NN_0$. We provide an asymptotically optimal $k$-recoverable code. We also realize a transducer with $\BO(k)$ states that adds two codewords from this $k$-recoverable code. Combined with our main result, we obtain a hazard-free adder circuit of size $2^{\BO(k)}n$ and depth $\BO(k\log n)$ with respect to this code, i.e., a $k$-recoverable adder circuit that adds two codewords of $n$ bits each. In other words, $k$-recoverable addition is fixed-parameter tractable with respect to $k$. %K Computer Science, Data Structures and Algorithms, cs.DS,Computer Science, Computational Complexity, cs.CC
[61]
P. Chalermsook, M. Goswami, L. Kozma, K. Mehlhorn, and T. Saranurak, “Multi-Finger Binary Search Trees,” in 29th International Symposium on Algorithms and Computation (ISAAC 2018), Jiaoxi, Yilan, Taiwan, 2018.
Export
BibTeX
@inproceedings{Chalermsook_ISAAC2018b, TITLE = {Multi-Finger Binary Search Trees}, AUTHOR = {Chalermsook, Parinya and Goswami, Mayank and Kozma, L{\'a}szl{\'o} and Mehlhorn, Kurt and Saranurak, Thatchaphol}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-094-1}, URL = {urn:nbn:de:0030-drops-100032}, DOI = {10.4230/LIPIcs.ISAAC.2018.55}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {29th International Symposium on Algorithms and Computation (ISAAC 2018)}, EDITOR = {Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou}, PAGES = {1--26}, EID = {55}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {123}, ADDRESS = {Jiaoxi, Yilan, Taiwan}, }
Endnote
%0 Conference Proceedings %A Chalermsook, Parinya %A Goswami, Mayank %A Kozma, László %A Mehlhorn, Kurt %A Saranurak, Thatchaphol %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Multi-Finger Binary Search Trees : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AADE-5 %R 10.4230/LIPIcs.ISAAC.2018.55 %U urn:nbn:de:0030-drops-100032 %D 2018 %B 29th International Symposium on Algorithms and Computation %Z date of event: 2018-12-16 - 2018-12-19 %C Jiaoxi, Yilan, Taiwan %B 29th International Symposium on Algorithms and Computation %E Hsu, Wen-Lian; Lee, Der-Tsai; Liao, Chung-Shou %P 1 - 26 %Z sequence number: 55 %I Schloss Dagstuhl %@ 978-3-95977-094-1 %B Leibniz International Proceedings in Informatics %N 123 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/10003/
[62]
P. Chalermsook, S. Das, G. Even, B. Laekhanukit, and D. Vaz, “Survivable Network Design for Group Connectivity in Low-Treewidth Graphs,” 2018. [Online]. Available: http://arxiv.org/abs/1802.10403. (arXiv: 1802.10403)
Abstract
In the Group Steiner Tree problem (GST), we are given a (vertex or edge)-weighted graph $G=(V,E)$ on $n$ vertices, a root vertex $r$ and a collection of groups $\{S_i\}_{i\in[h]}: S_i\subseteq V(G)$. The goal is to find a min-cost subgraph $H$ that connects the root to every group. We consider a fault-tolerant variant of GST, which we call Restricted (Rooted) Group SNDP. In this setting, each group $S_i$ has a demand $k_i\in[k],k\in\mathbb N$, and we wish to find a min-cost $H\subseteq G$ such that, for each group $S_i$, there is a vertex in $S_i$ connected to the root via $k_i$ (vertex or edge) disjoint paths. While GST admits $O(\log^2 n\log h)$ approximation, its high connectivity variants are Label-Cover hard, and for the vertex-weighted version, the hardness holds even when $k=2$. Previously, positive results were known only for the edge-weighted version when $k=2$ [Gupta et al., SODA 2010; Khandekar et al., Theor. Comput. Sci., 2012] and for a relaxed variant where the disjoint paths may end at different vertices in a group [Chalermsook et al., SODA 2015]. Our main result is an $O(\log n\log h)$ approximation for Restricted Group SNDP that runs in time $n^{f(k, w)}$, where $w$ is the treewidth of $G$. This nearly matches the lower bound when $k$ and $w$ are constant. The key to achieving this result is a non-trivial extension of the framework in [Chalermsook et al., SODA 2017], which embeds all feasible solutions to the problem into a dynamic program (DP) table. However, finding the optimal solution in the DP table remains intractable. We formulate a linear program relaxation for the DP and obtain an approximate solution via randomized rounding. This framework also allows us to systematically construct DP tables for high-connectivity problems. As a result, we present new exact algorithms for several variants of survivable network design problems in low-treewidth graphs.
Export
BibTeX
@online{Chalermsook_arXiv1802.10403, TITLE = {Survivable Network Design for Group Connectivity in Low-Treewidth Graphs}, AUTHOR = {Chalermsook, Parinya and Das, Syamantak and Even, Guy and Laekhanukit, Bundit and Vaz, Daniel}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1802.10403}, EPRINT = {1802.10403}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {In the Group Steiner Tree problem (GST), we are given a (vertex or edge)-weighted graph $G=(V,E)$ on $n$ vertices, a root vertex $r$ and a collection of groups $\{S_i\}_{i\in[h]}: S_i\subseteq V(G)$. The goal is to find a min-cost subgraph $H$ that connects the root to every group. We consider a fault-tolerant variant of GST, which we call Restricted (Rooted) Group SNDP. In this setting, each group $S_i$ has a demand $k_i\in[k],k\in\mathbb N$, and we wish to find a min-cost $H\subseteq G$ such that, for each group $S_i$, there is a vertex in $S_i$ connected to the root via $k_i$ (vertex or edge) disjoint paths. While GST admits $O(\log^2 n\log h)$ approximation, its high connectivity variants are Label-Cover hard, and for the vertex-weighted version, the hardness holds even when $k=2$. Previously, positive results were known only for the edge-weighted version when $k=2$ [Gupta et al., SODA 2010; Khandekar et al., Theor. Comput. Sci., 2012] and for a relaxed variant where the disjoint paths may end at different vertices in a group [Chalermsook et al., SODA 2015]. Our main result is an $O(\log n\log h)$ approximation for Restricted Group SNDP that runs in time $n^{f(k, w)}$, where $w$ is the treewidth of $G$. This nearly matches the lower bound when $k$ and $w$ are constant. The key to achieving this result is a non-trivial extension of the framework in [Chalermsook et al., SODA 2017], which embeds all feasible solutions to the problem into a dynamic program (DP) table. However, finding the optimal solution in the DP table remains intractable. We formulate a linear program relaxation for the DP and obtain an approximate solution via randomized rounding. This framework also allows us to systematically construct DP tables for high-connectivity problems. As a result, we present new exact algorithms for several variants of survivable network design problems in low-treewidth graphs.}, }
Endnote
%0 Report %A Chalermsook, Parinya %A Das, Syamantak %A Even, Guy %A Laekhanukit, Bundit %A Vaz, Daniel %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Survivable Network Design for Group Connectivity in Low-Treewidth Graphs : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A84E-A %U http://arxiv.org/abs/1802.10403 %D 2018 %X In the Group Steiner Tree problem (GST), we are given a (vertex or edge)-weighted graph $G=(V,E)$ on $n$ vertices, a root vertex $r$ and a collection of groups $\{S_i\}_{i\in[h]}: S_i\subseteq V(G)$. The goal is to find a min-cost subgraph $H$ that connects the root to every group. We consider a fault-tolerant variant of GST, which we call Restricted (Rooted) Group SNDP. In this setting, each group $S_i$ has a demand $k_i\in[k],k\in\mathbb N$, and we wish to find a min-cost $H\subseteq G$ such that, for each group $S_i$, there is a vertex in $S_i$ connected to the root via $k_i$ (vertex or edge) disjoint paths. While GST admits $O(\log^2 n\log h)$ approximation, its high connectivity variants are Label-Cover hard, and for the vertex-weighted version, the hardness holds even when $k=2$. Previously, positive results were known only for the edge-weighted version when $k=2$ [Gupta et al., SODA 2010; Khandekar et al., Theor. Comput. Sci., 2012] and for a relaxed variant where the disjoint paths may end at different vertices in a group [Chalermsook et al., SODA 2015]. Our main result is an $O(\log n\log h)$ approximation for Restricted Group SNDP that runs in time $n^{f(k, w)}$, where $w$ is the treewidth of $G$. This nearly matches the lower bound when $k$ and $w$ are constant. The key to achieving this result is a non-trivial extension of the framework in [Chalermsook et al., SODA 2017], which embeds all feasible solutions to the problem into a dynamic program (DP) table. However, finding the optimal solution in the DP table remains intractable. We formulate a linear program relaxation for the DP and obtain an approximate solution via randomized rounding. This framework also allows us to systematically construct DP tables for high-connectivity problems. As a result, we present new exact algorithms for several variants of survivable network design problems in low-treewidth graphs. %K Computer Science, Data Structures and Algorithms, cs.DS,Computer Science, Discrete Mathematics, cs.DM
[63]
P. Chalermsook, S. Das, G. Even, B. Laekhanukit, and D. Vaz, “Survivable Network Design for Group Connectivity in Low-Treewidth Graphs,” in Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018), Princeton, NJ, USA, 2018.
Export
BibTeX
@inproceedings{Chalermsook_APPROXRANDOM18, TITLE = {Survivable Network Design for Group Connectivity in Low-Treewidth Graphs}, AUTHOR = {Chalermsook, Parinya and Das, Syamantak and Even, Guy and Laekhanukit, Bundit and Vaz, Daniel}, LANGUAGE = {eng}, ISBN = {978-3-95977-085-9}, URL = {urn:nbn:de:0030-drops-94127}, DOI = {10.4230/LIPIcs.APPROX-RANDOM.2018.8}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)}, EDITOR = {Blais, Eric and Jansen, Klaus and Rolim, Jos{\'e} D. P. and Steurer, David}, PAGES = {1--19}, EID = {8}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {116}, ADDRESS = {Princeton, NJ, USA}, }
Endnote
%0 Conference Proceedings %A Chalermsook, Parinya %A Das, Syamantak %A Even, Guy %A Laekhanukit, Bundit %A Vaz, Daniel %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Survivable Network Design for Group Connectivity in Low-Treewidth Graphs : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A832-8 %R 10.4230/LIPIcs.APPROX-RANDOM.2018.8 %U urn:nbn:de:0030-drops-94127 %D 2018 %B 21st International Workshop on Approximation Algorithms for Combinatorial Optimization Problems / 22nd International Workshop on Randomization and Computation %Z date of event: 2018-08-20 - 2018-08-22 %C Princeton, NJ, USA %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %E Blais, Eric; Jansen, Klaus; Rolim, José D. P.; Steurer, David %P 1 - 19 %Z sequence number: 8 %I Schloss Dagstuhl %@ 978-3-95977-085-9 %B Leibniz International Proceedings in Informatics %N 116 %U http://drops.dagstuhl.de/opus/volltexte/2018/9412/
[64]
L. S. Chandran, A. Das, D. Issac, and E. J. van Leeuwen, “Algorithms and Bounds for Very Strong Rainbow Coloring,” in LATIN 2018: Theoretical Informatics, Buenos Aires, Argentina, 2018.
Export
BibTeX
@inproceedings{Chandran_LATIN2018, TITLE = {Algorithms and Bounds for Very Strong Rainbow Coloring}, AUTHOR = {Chandran, L. Sunil and Das, Anita and Issac, Davis and van Leeuwen, Erik Jan}, LANGUAGE = {eng}, ISBN = {978-3-319-77403-9}, DOI = {10.1007/978-3-319-77404-6_46}, PUBLISHER = {Springer}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {LATIN 2018: Theoretical Informatics}, EDITOR = {Bender, Michael A. and Farach-Colton, Mart{\'i}n and Mosteiro, Miguel A.}, PAGES = {625--639}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {10807}, ADDRESS = {Buenos Aires, Argentina}, }
Endnote
%0 Conference Proceedings %A Chandran, L. Sunil %A Das, Anita %A Issac, Davis %A van Leeuwen, Erik Jan %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Algorithms and Bounds for Very Strong Rainbow Coloring : %G eng %U http://hdl.handle.net/21.11116/0000-0002-576A-6 %R 10.1007/978-3-319-77404-6_46 %D 2018 %B 13th Latin American Theoretical Informatics Symposium %Z date of event: 2018-04-16 - 2018-04-19 %C Buenos Aires, Argentina %B LATIN 2018: Theoretical Informatics %E Bender, Michael A.; Farach-Colton, Martín; Mosteiro, Miguel A. %P 625 - 639 %I Springer %@ 978-3-319-77403-9 %B Lecture Notes in Computer Science %N 10807
[65]
L. S. Chandran, Y. K. Cheung, and D. Issac, “Spanning Tree Congestion and Computation of Generalized Györi-Lovász Partition,” in 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), Prague, Czech Republic, 2018.
Export
BibTeX
@inproceedings{stc-gyo-lov-2018-chandran, TITLE = {Spanning Tree Congestion and Computation of Generalized {Gy{\"o}ri-Lov{\'a}sz} Partition}, AUTHOR = {Chandran, L. Sunil and Cheung, Yun Kuen and Issac, Davis}, LANGUAGE = {eng}, ISBN = {978-3-95977-076-7}, URL = {urn:nbn:de:0030-drops-90361}, DOI = {10.4230/LIPIcs.ICALP.2018.32}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)}, EDITOR = {Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D{\'a}niel and Sannella, Donald}, PAGES = {1--14}, EID = {32}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {107}, ADDRESS = {Prague, Czech Republic}, }
Endnote
%0 Conference Proceedings %A Chandran, L. Sunil %A Cheung, Yun Kuen %A Issac, Davis %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Spanning Tree Congestion and Computation of Generalized Györi-Lovász Partition : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9E67-9 %R 10.4230/LIPIcs.ICALP.2018.32 %U urn:nbn:de:0030-drops-90361 %D 2018 %B 45th International Colloquium on Automata, Languages, and Programming %Z date of event: 2018-07-09 - 2018-07-13 %C Prague, Czech Republic %B 45th International Colloquium on Automata, Languages, and Programming %E Chatzigiannakis, Ioannis; Kaklamanis, Christos; Marx, Dániel; Sannella, Donald %P 1 - 14 %Z sequence number: 32 %I Schloss Dagstuhl %@ 978-3-95977-076-7 %B Leibniz International Proceedings in Informatics %N 107 %U http://drops.dagstuhl.de/opus/volltexte/2018/9036/
[66]
T. M. Chan, T. C. van Dijk, K. Fleszar, J. Spoerhase, and A. Wolff, “Stabbing Rectangles by Line Segments - How Decomposition Reduces the Shallow-Cell Complexity,” in 29th International Symposium on Algorithms and Computation (ISAAC 2018), Jiaoxi, Yilan, Taiwan, 2018.
Export
BibTeX
@inproceedings{Chan_ISAAC2018b, TITLE = {Stabbing Rectangles by Line Segments -- How Decomposition Reduces the Shallow-Cell Complexity}, AUTHOR = {Chan, Timothy M. and van Dijk, Thomas C. and Fleszar, Krzysztof and Spoerhase, Joachim and Wolff, Alexander}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-094-1}, URL = {urn:nbn:de:0030-drops-100094}, DOI = {10.4230/LIPIcs.ISAAC.2018.61}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {29th International Symposium on Algorithms and Computation (ISAAC 2018)}, EDITOR = {Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou}, PAGES = {1--13}, EID = {61}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {123}, ADDRESS = {Jiaoxi, Yilan, Taiwan}, }
Endnote
%0 Conference Proceedings %A Chan, Timothy M. %A van Dijk, Thomas C. %A Fleszar, Krzysztof %A Spoerhase, Joachim %A Wolff, Alexander %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Stabbing Rectangles by Line Segments - How Decomposition Reduces the Shallow-Cell Complexity : %G eng %U http://hdl.handle.net/21.11116/0000-0002-ADEA-4 %R 10.4230/LIPIcs.ISAAC.2018.61 %U urn:nbn:de:0030-drops-100094 %D 2018 %B 29th International Symposium on Algorithms and Computation %Z date of event: 2018-12-16 - 2018-12-19 %C Jiaoxi, Yilan, Taiwan %B 29th International Symposium on Algorithms and Computation %E Hsu, Wen-Lian; Lee, Der-Tsai; Liao, Chung-Shou %P 1 - 13 %Z sequence number: 61 %I Schloss Dagstuhl %@ 978-3-95977-094-1 %B Leibniz International Proceedings in Informatics %N 123 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/10009/
[67]
N. Chen, M. Hoefer, M. Künnemann, C. Lin, and P. Miao, “Secretary Markets with Local Information,” Distributed Computing. (Accepted/in press)
Export
BibTeX
@article{Chen2018, TITLE = {Secretary Markets with Local Information}, AUTHOR = {Chen, Ning and Hoefer, Martin and K{\"u}nnemann, Marvin and Lin, Chengyu and Miao, Peihan}, LANGUAGE = {eng}, ISSN = {0178-2770}, DOI = {10.1007/s00446-018-0327-5}, PUBLISHER = {Springer International}, ADDRESS = {Berlin}, YEAR = {2018}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, JOURNAL = {Distributed Computing}, }
Endnote
%0 Journal Article %A Chen, Ning %A Hoefer, Martin %A Künnemann, Marvin %A Lin, Chengyu %A Miao, Peihan %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Secretary Markets with Local Information : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A90C-3 %R 10.1007/s00446-018-0327-5 %D 2018 %J Distributed Computing %I Springer International %C Berlin %@ false
[68]
Y. K. Cheung, M. Hoefer, and P. Nakhe, “Tracing Equilibrium in Dynamic Markets via Distributed Adaptation,” 2018. [Online]. Available: http://arxiv.org/abs/1804.08017. (arXiv: 1804.08017)
Abstract
Competitive equilibrium is a central concept in economics with numerous applications beyond markets, such as scheduling, fair allocation of goods, or bandwidth distribution in networks. Computation of competitive equilibria has received a significant amount of interest in algorithmic game theory, mainly for the prominent case of Fisher markets. Natural and decentralized processes like tatonnement and proportional response dynamics (PRD) converge quickly towards equilibrium in large classes of Fisher markets. Almost all of the literature assumes that the market is a static environment and that the parameters of agents and goods do not change over time. In contrast, many large real-world markets are subject to frequent and dynamic changes. In this paper, we provide the first provable performance guarantees of discrete-time tatonnement and PRD in markets that are subject to perturbation over time. We analyze the prominent class of Fisher markets with CES utilities and quantify the impact of changes in supplies of goods, budgets of agents, and utility functions of agents on the convergence of tatonnement to market equilibrium. Since the equilibrium becomes a dynamic object and will rarely be reached, our analysis provides bounds expressing the distance to equilibrium that will be maintained via tatonnement and PRD updates. Our results indicate that in many cases, tatonnement and PRD follow the equilibrium rather closely and quickly recover conditions of approximate market clearing. Our approach can be generalized to analyzing a general class of Lyapunov dynamical systems with changing system parameters, which might be of independent interest.
Export
BibTeX
@online{Cheung_arXiv1804.08017, TITLE = {Tracing Equilibrium in Dynamic Markets via Distributed Adaptation}, AUTHOR = {Cheung, Yun Kuen and Hoefer, Martin and Nakhe, Paresh}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1804.08017}, EPRINT = {1804.08017}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Competitive equilibrium is a central concept in economics with numerous applications beyond markets, such as scheduling, fair allocation of goods, or bandwidth distribution in networks. Computation of competitive equilibria has received a significant amount of interest in algorithmic game theory, mainly for the prominent case of Fisher markets. Natural and decentralized processes like tatonnement and proportional response dynamics (PRD) converge quickly towards equilibrium in large classes of Fisher markets. Almost all of the literature assumes that the market is a static environment and that the parameters of agents and goods do not change over time. In contrast, many large real-world markets are subject to frequent and dynamic changes. In this paper, we provide the first provable performance guarantees of discrete-time tatonnement and PRD in markets that are subject to perturbation over time. We analyze the prominent class of Fisher markets with CES utilities and quantify the impact of changes in supplies of goods, budgets of agents, and utility functions of agents on the convergence of tatonnement to market equilibrium. Since the equilibrium becomes a dynamic object and will rarely be reached, our analysis provides bounds expressing the distance to equilibrium that will be maintained via tatonnement and PRD updates. Our results indicate that in many cases, tatonnement and PRD follow the equilibrium rather closely and quickly recover conditions of approximate market clearing. Our approach can be generalized to analyzing a general class of Lyapunov dynamical systems with changing system parameters, which might be of independent interest.}, }
Endnote
%0 Report %A Cheung, Yun Kuen %A Hoefer, Martin %A Nakhe, Paresh %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Tracing Equilibrium in Dynamic Markets via Distributed Adaptation : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AE08-2 %U http://arxiv.org/abs/1804.08017 %D 2018 %X Competitive equilibrium is a central concept in economics with numerous applications beyond markets, such as scheduling, fair allocation of goods, or bandwidth distribution in networks. Computation of competitive equilibria has received a significant amount of interest in algorithmic game theory, mainly for the prominent case of Fisher markets. Natural and decentralized processes like tatonnement and proportional response dynamics (PRD) converge quickly towards equilibrium in large classes of Fisher markets. Almost all of the literature assumes that the market is a static environment and that the parameters of agents and goods do not change over time. In contrast, many large real-world markets are subject to frequent and dynamic changes. In this paper, we provide the first provable performance guarantees of discrete-time tatonnement and PRD in markets that are subject to perturbation over time. We analyze the prominent class of Fisher markets with CES utilities and quantify the impact of changes in supplies of goods, budgets of agents, and utility functions of agents on the convergence of tatonnement to market equilibrium. Since the equilibrium becomes a dynamic object and will rarely be reached, our analysis provides bounds expressing the distance to equilibrium that will be maintained via tatonnement and PRD updates. Our results indicate that in many cases, tatonnement and PRD follow the equilibrium rather closely and quickly recover conditions of approximate market clearing. Our approach can be generalized to analyzing a general class of Lyapunov dynamical systems with changing system parameters, which might be of independent interest. %K Computer Science, Computer Science and Game Theory, cs.GT
[69]
Y. K. Cheung, R. Cole, and Y. Tao, “Parallel Stochastic Asynchronous Coordinate Descent: Tight Bounds on the Possible Parallelism,” 2018. [Online]. Available: http://arxiv.org/abs/1811.05087. (arXiv: 1811.05087)
Abstract
Several works have shown linear speedup is achieved by an asynchronous parallel implementation of stochastic coordinate descent so long as there is not too much parallelism. More specifically, it is known that if all updates are of similar duration, then linear speedup is possible with up to $\Theta(\sqrt n/L_{\mathsf{res}})$ processors, where $L_{\mathsf{res}}$ is a suitable Lipschitz parameter. This paper shows the bound is tight for essentially all possible values of $L_{\mathsf{res}}$.
Export
BibTeX
@online{corr/abs-1811-05087, TITLE = {Parallel Stochastic Asynchronous Coordinate Descent: {T}ight Bounds on the Possible Parallelism}, AUTHOR = {Cheung, Yun Kuen and Cole, Richard and Tao, Yixin}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1811.05087}, EPRINT = {1811.05087}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Several works have shown linear speedup is achieved by an asynchronous parallel implementation of stochastic coordinate descent so long as there is not too much parallelism. More specifically, it is known that if all updates are of similar duration, then linear speedup is possible with up to $\Theta(\sqrt n/L_{\mathsf{res}})$ processors, where $L_{\mathsf{res}}$ is a suitable Lipschitz parameter. This paper shows the bound is tight for essentially all possible values of $L_{\mathsf{res}}$.}, }
Endnote
%0 Report %A Cheung, Yun Kuen %A Cole, Richard %A Tao, Yixin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Parallel Stochastic Asynchronous Coordinate Descent: Tight Bounds on the Possible Parallelism : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AAF2-D %U http://arxiv.org/abs/1811.05087 %D 2018 %X Several works have shown linear speedup is achieved by an asynchronous parallel implementation of stochastic coordinate descent so long as there is not too much parallelism. More specifically, it is known that if all updates are of similar duration, then linear speedup is possible with up to $\Theta(\sqrt n/L_{\mathsf{res}})$ processors, where $L_{\mathsf{res}}$ is a suitable Lipschitz parameter. This paper shows the bound is tight for essentially all possible values of $L_{\mathsf{res}}$. %K Mathematics, Optimization and Control, math.OC,Computer Science, Distributed, Parallel, and Cluster Computing, cs.DC
[70]
Y. K. Cheung, “Multiplicative Weights Updates with Constant Step-Size in Graphical Constant-Sum Games,” in Advances in Neural Information Processing Systems 31 (NIPS 2018), Montréal, Canada, 2018.
Export
BibTeX
@inproceedings{NeurIPS/Cheung18, TITLE = {Multiplicative Weights Updates with Constant Step-Size in Graphical Constant-Sum Games}, AUTHOR = {Cheung, Yun Kuen}, LANGUAGE = {eng}, PUBLISHER = {Curran Associates}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Advances in Neural Information Processing Systems 31 (NIPS 2018)}, EDITOR = {Bengio, S. and Wallach, H. and Larochelle, H. and Grauman, K. and Cesa-Bianchi, N. and Garnett, R.}, PAGES = {3532--3542}, ADDRESS = {Montr{\'e}al, Canada}, }
Endnote
%0 Conference Proceedings %A Cheung, Yun Kuen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Multiplicative Weights Updates with Constant Step-Size in Graphical Constant-Sum Games : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AB07-6 %D 2018 %B Thirty-second Conference on Neural Information Processing Systems %Z date of event: 2018-12-02 - 2018-12-08 %C Montréal, Canada %B Advances in Neural Information Processing Systems 31 %E Bengio, S.; Wallach, H.; Larochelle, H.; Grauman, K.; Cesa-Bianchi, N.; Garnett, R. %P 3532 - 3542 %I Curran Associates %U http://papers.nips.cc/paper/7612-multiplicative-weights-updates-with-constant-step-size-in-graphical-constant-sum-games.pdf
[71]
Y. K. Cheung, R. Cole, and Y. Tao, “Dynamics of Distributed Updating in Fisher Markets,” in ACM EC’18, Nineteenth ACM Conference on Economics and Computation, Ithaca, NY, USA, 2018.
Export
BibTeX
@inproceedings{EC/CCT18, TITLE = {Dynamics of Distributed Updating in {F}isher Markets}, AUTHOR = {Cheung, Yun Kuen and Cole, Richard and Tao, Yixin}, LANGUAGE = {eng}, ISBN = {978-1-4503-5829-3}, DOI = {10.1145/3219166.3219189}, PUBLISHER = {ACM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {ACM EC'18, Nineteenth ACM Conference on Economics and Computation}, PAGES = {351--368}, ADDRESS = {Ithaca, NY, USA}, }
Endnote
%0 Conference Proceedings %A Cheung, Yun Kuen %A Cole, Richard %A Tao, Yixin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Dynamics of Distributed Updating in Fisher Markets : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AAE9-8 %R 10.1145/3219166.3219189 %D 2018 %B Nineteenth ACM Conference on Economics and Computation %Z date of event: 2018-06-18 - 2018-06-22 %C Ithaca, NY, USA %B ACM EC'18 %P 351 - 368 %I ACM %@ 978-1-4503-5829-3
[72]
Y. K. Cheung and R. Cole, “Amortized Analysis of Asynchronous Price Dynamics,” in 26th Annual European Symposium on Algorithms (ESA 2018), Helsinki, Finland, 2018.
Export
BibTeX
@inproceedings{Cheung_ESA2018, TITLE = {Amortized Analysis of Asynchronous Price Dynamics}, AUTHOR = {Cheung, Yun Kuen and Cole, Richard}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-081-1}, URL = {urn:nbn:de:0030-drops-94812}, DOI = {10.4230/LIPIcs.ESA.2018.18}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {26th Annual European Symposium on Algorithms (ESA 2018)}, EDITOR = {Azar, Yossi and Bast, Hannah and Herman, Grzegorz}, PAGES = {1--15}, EID = {18}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {112}, ADDRESS = {Helsinki, Finland}, }
Endnote
%0 Conference Proceedings %A Cheung, Yun Kuen %A Cole, Richard %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Amortized Analysis of Asynchronous Price Dynamics : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AAEE-3 %R 10.4230/LIPIcs.ESA.2018.18 %U urn:nbn:de:0030-drops-94812 %D 2018 %B 26th Annual European Symposium on Algorithms %Z date of event: 2018-08-20 - 2018-08-22 %C Helsinki, Finland %B 26th Annual European Symposium on Algorithms %E Azar, Yossi; Bast, Hannah; Herman, Grzegorz %P 1 - 15 %Z sequence number: 18 %I Schloss Dagstuhl %@ 978-3-95977-081-1 %B Leibniz International Proceedings in Informatics %N 112 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/9481/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[73]
Y. K. Cheung, R. Cole, and Y. Tao, “(Near) Optimal Parallelism Bound for Fully Asynchronous Coordinate Descent with Linear Speedup,” 2018. [Online]. Available: http://arxiv.org/abs/1811.03254. (arXiv: 1811.03254)
Abstract
When solving massive optimization problems in areas such as machine learning, it is a common practice to seek speedup via massive parallelism. However, especially in an asynchronous environment, there are limits on the possible parallelism. Accordingly, we seek tight bounds on the viable parallelism in asynchronous implementations of coordinate descent. We focus on asynchronous coordinate descent (ACD) algorithms on convex functions $F:\mathbb{R}^n \rightarrow \mathbb{R}$ of the form $$F(x) = f(x) ~+~ \sum_{k=1}^n \Psi_k(x_k),$$ where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a smooth convex function, and each $\Psi_k:\mathbb{R} \rightarrow \mathbb{R}$ is a univariate and possibly non-smooth convex function. Our approach is to quantify the shortfall in progress compared to the standard sequential stochastic gradient descent. This leads to a truly simple yet optimal analysis of the standard stochastic ACD in a partially asynchronous environment, which already generalizes and improves on the bounds in prior work. We also give a considerably more involved analysis for general asynchronous environments in which the only constraint is that each update can overlap with at most $q$ others, where $q$ is at most the number of processors times the ratio in the lengths of the longest and shortest updates. The main technical challenge is to demonstrate linear speedup in the latter environment. This stems from the subtle interplay of asynchrony and randomization. This improves Liu and Wright's (SIOPT'15) lower bound on the maximum degree of parallelism almost quadratically, and we show that our new bound is almost optimal.
Export
BibTeX
@online{corr/abs-1811-03254, TITLE = {(Near) Optimal Parallelism Bound for Fully Asynchronous Coordinate Descent with Linear Speedup}, AUTHOR = {Cheung, Yun Kuen and Cole, Richard and Tao, Yixin}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1811.03254}, EPRINT = {1811.03254}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {When solving massive optimization problems in areas such as machine learning, it is a common practice to seek speedup via massive parallelism. However, especially in an asynchronous environment, there are limits on the possible parallelism. Accordingly, we seek tight bounds on the viable parallelism in asynchronous implementations of coordinate descent. We focus on asynchronous coordinate descent (ACD) algorithms on convex functions $F:\mathbb{R}^n \rightarrow \mathbb{R}$ of the form $$F(x) = f(x) ~+~ \sum_{k=1}^n \Psi_k(x_k),$$ where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a smooth convex function, and each $\Psi_k:\mathbb{R} \rightarrow \mathbb{R}$ is a univariate and possibly non-smooth convex function. Our approach is to quantify the shortfall in progress compared to the standard sequential stochastic gradient descent. This leads to a truly simple yet optimal analysis of the standard stochastic ACD in a partially asynchronous environment, which already generalizes and improves on the bounds in prior work. We also give a considerably more involved analysis for general asynchronous environments in which the only constraint is that each update can overlap with at most $q$ others, where $q$ is at most the number of processors times the ratio in the lengths of the longest and shortest updates. The main technical challenge is to demonstrate linear speedup in the latter environment. This stems from the subtle interplay of asynchrony and randomization. This improves Liu and Wright's (SIOPT'15) lower bound on the maximum degree of parallelism almost quadratically, and we show that our new bound is almost optimal.}, }
Endnote
%0 Report %A Cheung, Yun Kuen %A Cole, Richard %A Tao, Yixin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T (Near) Optimal Parallelism Bound for Fully Asynchronous Coordinate Descent with Linear Speedup : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AAF5-A %U http://arxiv.org/abs/1811.03254 %D 2018 %X When solving massive optimization problems in areas such as machine learning, it is a common practice to seek speedup via massive parallelism. However, especially in an asynchronous environment, there are limits on the possible parallelism. Accordingly, we seek tight bounds on the viable parallelism in asynchronous implementations of coordinate descent. We focus on asynchronous coordinate descent (ACD) algorithms on convex functions $F:\mathbb{R}^n \rightarrow \mathbb{R}$ of the form $$F(x) = f(x) ~+~ \sum_{k=1}^n \Psi_k(x_k),$$ where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a smooth convex function, and each $\Psi_k:\mathbb{R} \rightarrow \mathbb{R}$ is a univariate and possibly non-smooth convex function. Our approach is to quantify the shortfall in progress compared to the standard sequential stochastic gradient descent. This leads to a truly simple yet optimal analysis of the standard stochastic ACD in a partially asynchronous environment, which already generalizes and improves on the bounds in prior work. We also give a considerably more involved analysis for general asynchronous environments in which the only constraint is that each update can overlap with at most $q$ others, where $q$ is at most the number of processors times the ratio in the lengths of the longest and shortest updates. The main technical challenge is to demonstrate linear speedup in the latter environment. This stems from the subtle interplay of asynchrony and randomization. This improves Liu and Wright's (SIOPT'15) lower bound on the maximum degree of parallelism almost quadratically, and we show that our new bound is almost optimal. %K Mathematics, Optimization and Control, math.OC,Computer Science, Distributed, Parallel, and Cluster Computing, cs.DC
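Entries [69] and [73] concern stochastic coordinate descent on composite objectives of the form F(x) = f(x) + \sum_k \Psi_k(x_k). As a reference point for that objective class only, here is a minimal sequential proximal coordinate-descent sketch for a least-squares f plus an l1 penalty; it does not model the asynchronous updates or the parallelism bounds studied in these papers, and the problem data and step sizes are illustrative.

```python
import numpy as np

# Minimal sketch of sequential stochastic proximal coordinate descent for
# F(x) = f(x) + sum_k Psi_k(x_k) with f(x) = 0.5*||Ax - b||^2 and Psi_k = lam*|x_k|.
# Only a reference point for the objective class above; the asynchronous
# algorithms and parallelism bounds of the cited papers are not modeled here.

rng = np.random.default_rng(1)
m, n, lam = 50, 20, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
L = (A ** 2).sum(axis=0)             # coordinate-wise Lipschitz constants of grad f

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
residual = A @ x - b
for _ in range(5000):
    k = rng.integers(n)                                      # pick a random coordinate
    g_k = A[:, k] @ residual                                  # partial derivative of f w.r.t. x_k
    x_new = soft_threshold(x[k] - g_k / L[k], lam / L[k])     # prox step on Psi_k
    residual += A[:, k] * (x_new - x[k])
    x[k] = x_new
print("objective:", 0.5 * residual @ residual + lam * np.abs(x).sum())
```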
[74]
Y. K. Cheung, “Steiner Point Removal - Distant Terminals Don’t (Really) Bother,” in SODA’18, Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 2018.
Export
BibTeX
@inproceedings{Cheung_SODA18, TITLE = {{S}teiner Point Removal -- Distant Terminals Don't (Really) Bother}, AUTHOR = {Cheung, Yun Kuen}, LANGUAGE = {eng}, ISBN = {978-1-61197-503-1}, DOI = {10.1137/1.9781611975031.89}, PUBLISHER = {SIAM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {SODA'18, Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms}, EDITOR = {Czumaj, Artur}, PAGES = {1353--1360}, ADDRESS = {New Orleans, LA, USA}, }
Endnote
%0 Conference Proceedings %A Cheung, Yun Kuen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Steiner Point Removal - Distant Terminals Don't (Really) Bother : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AA8C-1 %R 10.1137/1.9781611975031.89 %D 2018 %B Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2018-01-07 - 2018-01-10 %C New Orleans, LA, USA %B SODA'18 %E Czumaj, Artur %P 1353 - 1360 %I SIAM %@ 978-1-61197-503-1
[75]
L. Chiantini, J. D. Hauenstein, C. Ikenmeyer, J. M. Landsberg, and G. Ottaviani, “Polynomials and the Exponent of Matrix Multiplication,” Bulletin of the London Mathematical Society, vol. 50, no. 3, 2018.
Export
BibTeX
@article{Chaintini2018, TITLE = {Polynomials and the Exponent of Matrix Multiplication}, AUTHOR = {Chiantini, Luca and Hauenstein, Jonathan D. and Ikenmeyer, Christian and Landsberg, Joseph M. and Ottaviani, Giorgio}, LANGUAGE = {eng}, ISSN = {0024-6093}, DOI = {10.1112/blms.12147}, PUBLISHER = {London Mathematical Society}, ADDRESS = {London}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Bulletin of the London Mathematical Society}, VOLUME = {50}, NUMBER = {3}, PAGES = {369--389}, }
Endnote
%0 Journal Article %A Chiantini, Luca %A Hauenstein, Jonathan D. %A Ikenmeyer, Christian %A Landsberg, Joseph M. %A Ottaviani, Giorgio %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Polynomials and the Exponent of Matrix Multiplication : %G eng %U http://hdl.handle.net/21.11116/0000-0001-88D0-A %R 10.1112/blms.12147 %7 2018 %D 2018 %J Bulletin of the London Mathematical Society %V 50 %N 3 %& 369 %P 369 - 389 %I London Mathematical Society %C London %@ false
[76]
G. Christodoulou and A. Sgouritsa, “Designing Networks with Good Equilibria under Uncertainty,” SIAM Journal on Computing. (Accepted/in press)
Export
BibTeX
@article{Christodoulou2019SICOMP, TITLE = {Designing Networks with Good Equilibria under Uncertainty}, AUTHOR = {Christodoulou, George and Sgouritsa, Alkmini}, LANGUAGE = {eng}, ISSN = {0097-5397}, PUBLISHER = {Society for Industrial and Applied Mathematics.}, ADDRESS = {Philadelphia}, YEAR = {2018}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, JOURNAL = {SIAM Journal on Computing}, }
Endnote
%0 Journal Article %A Christodoulou, George %A Sgouritsa, Alkmini %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Designing Networks with Good Equilibria under Uncertainty : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AEC7-A %D 2018 %J SIAM Journal on Computing %I Society for Industrial and Applied Mathematics. %C Philadelphia %@ false
[77]
A. Clementi, M. Ghaffari, L. Gualà, E. Natale, F. Pasquale, and G. Scornavacca, “A Tight Analysis of the Parallel Undecided-State Dynamics with Two Colors,” in 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018), Liverpool, UK, 2018.
Export
BibTeX
@inproceedings{Clementi_MFCS2018, TITLE = {A Tight Analysis of the Parallel Undecided-State Dynamics with Two Colors}, AUTHOR = {Clementi, Andrea and Ghaffari, Mohsen and Gual{\`a}, Luciano and Natale, Emanuele and Pasquale, Francesco and Scornavacca, Giacomo}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-086-6}, URL = {urn:nbn:de:0030-drops-96107}, DOI = {10.4230/LIPIcs.MFCS.2018.28}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)}, EDITOR = {Potapov, Igor and Spirakis, Paul and Worrell, James}, PAGES = {1--15}, EID = {28}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {117}, ADDRESS = {Liverpool, UK}, }
Endnote
%0 Conference Proceedings %A Clementi, Andrea %A Ghaffari, Mohsen %A Gualà, Luciano %A Natale, Emanuele %A Pasquale, Francesco %A Scornavacca, Giacomo %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Tight Analysis of the Parallel Undecided-State Dynamics with Two Colors : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A96C-7 %R 10.4230/LIPIcs.MFCS.2018.28 %U urn:nbn:de:0030-drops-96107 %D 2018 %B 43rd International Symposium on Mathematical Foundations of Computer Science %Z date of event: 2018-08-27 - 2018-08-31 %C Liverpool, UK %B 43rd International Symposium on Mathematical Foundations of Computer Science %E Potapov, Igor; Spirakis, Paul; Worrell, James %P 1 - 15 %Z sequence number: 28 %I Schloss Dagstuhl %@ 978-3-95977-086-6 %B Leibniz International Proceedings in Informatics %N 117 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/9610/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[78]
J. Correa, P. Foncea, R. Hoeksma, T. Oosterwijk, and T. Vredeveld, “Recent Developments in Prophet Inequalities,” ACM SIGecom Exchanges. (Accepted/in press)
Export
BibTeX
@article{Correa2018, TITLE = {Recent Developments in Prophet Inequalities}, AUTHOR = {Correa, Jos{\'e} and Foncea, Patricio and Hoeksma, Ruben and Oosterwijk, Tim and Vredeveld, Tjark}, LANGUAGE = {eng}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2018}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, JOURNAL = {ACM SIGecom Exchanges}, }
Endnote
%0 Journal Article %A Correa, José %A Foncea, Patricio %A Hoeksma, Ruben %A Oosterwijk, Tim %A Vredeveld, Tjark %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Recent Developments in Prophet Inequalities : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9E6F-1 %D 2018 %J ACM SIGecom Exchanges %I ACM %C New York, NY
[79]
C. Croitoru and K. Mehlhorn, “On Testing Substitutability,” Information Processing Letters, vol. 138, 2018.
Export
BibTeX
@article{Croitoru_2018, TITLE = {On Testing Substitutability}, AUTHOR = {Croitoru, Cosmina and Mehlhorn, Kurt}, LANGUAGE = {eng}, ISSN = {0020-0190}, DOI = {10.1016/j.ipl.2018.05.006}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Information Processing Letters}, VOLUME = {138}, PAGES = {19--21}, }
Endnote
%0 Journal Article %A Croitoru, Cosmina %A Mehlhorn, Kurt %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On Testing Substitutability : %G eng %U http://hdl.handle.net/21.11116/0000-0001-EE14-D %R 10.1016/j.ipl.2018.05.006 %7 2018 %D 2018 %J Information Processing Letters %V 138 %& 19 %P 19 - 21 %I Elsevier %C Amsterdam %@ false
[80]
C. Croitoru and K. Mehlhorn, “On Testing Substitutability,” 2018. [Online]. Available: http://arxiv.org/abs/1805.07642. (arXiv: 1805.07642)
Abstract
The papers~\cite{hatfimmokomi11} and~\cite{azizbrilharr13} propose algorithms for testing whether the choice function induced by a (strict) preference list of length $N$ over a universe $U$ is substitutable. The running time of these algorithms is $O(|U|^3\cdot N^3)$, respectively $O(|U|^2\cdot N^3)$. In this note we present an algorithm with running time $O(|U|^2\cdot N^2)$. Note that $N$ may be exponential in the size $|U|$ of the universe.
Export
BibTeX
@online{Croitoru_arXiv1805.07642, TITLE = {On Testing Substitutability}, AUTHOR = {Croitoru, Cosmina and Mehlhorn, Kurt}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1805.07642}, EPRINT = {1805.07642}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {The papers~\cite{hatfimmokomi11} and~\cite{azizbrilharr13} propose algorithms for testing whether the choice function induced by a (strict) preference list of length $N$ over a universe $U$ is substitutable. The running time of these algorithms is $O(|U|^3\cdot N^3)$, respectively $O(|U|^2\cdot N^3)$. In this note we present an algorithm with running time $O(|U|^2\cdot N^2)$. Note that $N$ may be exponential in the size $|U|$ of the universe.}, }
Endnote
%0 Report %A Croitoru, Cosmina %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On Testing Substitutability : %G eng %U http://hdl.handle.net/21.11116/0000-0002-05FA-F %U http://arxiv.org/abs/1805.07642 %D 2018 %X The papers~\cite{hatfimmokomi11} and~\cite{azizbrilharr13} propose algorithms for testing whether the choice function induced by a (strict) preference list of length $N$ over a universe $U$ is substitutable. The running time of these algorithms is $O(|U|^3\cdot N^3)$, respectively $O(|U|^2\cdot N^3)$. In this note we present an algorithm with running time $O(|U|^2\cdot N^2)$. Note that $N$ may be exponential in the size $|U|$ of the universe. %K Computer Science, Data Structures and Algorithms, cs.DS,econ.EM
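For readers unfamiliar with the property tested in entry [80], the toy sketch below spells out one common formalization: a ranked list of acceptable bundles induces the choice function C(S) = the highest-ranked bundle contained in the offer set S, and C is substitutable if C(T) ∩ S ⊆ C(S) whenever S ⊆ T. The exhaustive check is exponential in |U| and only illustrates the definition; it is not the O(|U|² · N²) algorithm of the paper, and the bundle list is an invented example.

```python
from itertools import combinations

# Toy illustration of substitutability of a choice function induced by a
# ranked preference list of bundles (an invented example, not from the paper).
# C(S) = the highest-ranked bundle that is a subset of S (empty set if none).
# Substitutability: for all S <= T (as sets), C(T) & S <= C(S).
# This exhaustive check is exponential in |U|; it only illustrates the
# definition, not the polynomial-time test of the cited paper.

U = {"a", "b", "c"}
preference_list = [{"a", "b"}, {"b", "c"}, {"a"}, {"c"}]  # most preferred first

def choice(offered):
    for bundle in preference_list:
        if bundle <= offered:
            return bundle
    return set()

def is_substitutable():
    items = sorted(U)
    subsets = []
    for r in range(len(items) + 1):
        subsets += [set(c) for c in combinations(items, r)]
    for T in subsets:
        for S in subsets:
            if S <= T and not (choice(T) & S) <= choice(S):
                return False, S, T   # witness of a violation
    return True, None, None

print(is_substitutable())
```

On this particular toy list the check reports a violation: b is chosen from the offer set {a, b} (as part of the top-ranked bundle) but rejected when offered alone.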
[81]
E. Cruciani, E. Natale, A. Nusser, and G. Scornavacca, “On the Emergent Behavior of the 2-Choices Dynamics,” in Proceedings of the 19th Italian Conference on Theoretical Computer Science (ICTCS 2018), Urbino, Italy, 2018.
Export
BibTeX
@inproceedings{Cruciano_ICTCS2018, TITLE = {On the Emergent Behavior of the 2-Choices Dynamics}, AUTHOR = {Cruciani, Emilio and Natale, Emanuele and Nusser, Andr{\'e} and Scornavacca, Giacomo}, LANGUAGE = {eng}, URL = {urn:nbn:de:0074-2243-4}, PUBLISHER = {CEUR-WS}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Proceedings of the 19th Italian Conference on Theoretical Computer Science (ICTCS 2018)}, EDITOR = {Aldini, Alessandro and Bernardo, Marco}, SERIES = {CEUR Workshop Proceedings}, VOLUME = {2243}, ADDRESS = {Urbino, Italy}, }
Endnote
%0 Conference Proceedings %A Cruciani, Emilio %A Natale, Emanuele %A Nusser, André %A Scornavacca, Giacomo %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On the Emergent Behavior of the 2-Choices Dynamics : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A44E-E %D 2018 %B 19th Italian Conference on Theoretical Computer Science %Z date of event: 2018-09-18 - 2018-09-20 %C Urbino, Italy %B Proceedings of the 19th Italian Conference on Theoretical Computer Science %E Aldini, Alessandro; Bernardo, Marco %I CEUR-WS %B CEUR Workshop Proceedings %N 2243 %U http://ceur-ws.org/Vol-2243/paper4.pdf
[82]
E. Cruciani, E. Natale, A. Nusser, and G. Scornavacca, “Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks,” 2018. [Online]. Available: http://arxiv.org/abs/1804.07223. (arXiv: 1804.07223)
Abstract
Consider the following process on a network: Each agent initially holds either opinion blue or red; then, in each round, each agent looks at two random neighbors and, if the two have the same opinion, the agent adopts it. This process is known as the 2-Choices dynamics and is arguably the most basic non-trivial opinion dynamics modeling voting behavior on social networks. Despite its apparent simplicity, 2-Choices has been analytically characterized only on networks with a strong expansion property -- under assumptions on the initial configuration that establish it as a fast majority consensus protocol. In this work, we aim at contributing to the understanding of the 2-Choices dynamics by considering its behavior on a class of networks with core-periphery structure, a well-known topological assumption in social networks. In a nutshell, assume that a densely-connected subset of agents, the core, holds a different opinion from the rest of the network, the periphery. Then, depending on the strength of the cut between the core and the periphery, a phase-transition phenomenon occurs: Either the core's opinion rapidly spreads among the rest of the network, or a metastability phase takes place, in which both opinions coexist in the network for superpolynomial time. The interest of our result is twofold. On the one hand, by looking at the 2-Choices dynamics as a simplistic model of competition among opinions in social networks, our theorem sheds light on the influence of the core on the rest of the network, as a function of the core's connectivity towards the latter. On the other hand, to the best of our knowledge, we provide the first analytical result which shows a heterogeneous behavior of a simple dynamics as a function of structural parameters of the network. Finally, we validate our theoretical predictions with extensive experiments on real networks.
Export
BibTeX
@online{Cruciano_arXiv1804.07223, TITLE = {Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks}, AUTHOR = {Cruciani, Emilio and Natale, Emanuele and Nusser, Andr{\'e} and Scornavacca, Giacomo}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1804.07223}, EPRINT = {1804.07223}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Consider the following process on a network: Each agent initially holds either opinion blue or red; then, in each round, each agent looks at two random neighbors and, if the two have the same opinion, the agent adopts it. This process is known as the 2-Choices dynamics and is arguably the most basic non-trivial opinion dynamics modeling voting behavior on social networks. Despite its apparent simplicity, 2-Choices has been analytically characterized only on networks with a strong expansion property -- under assumptions on the initial configuration that establish it as a fast majority consensus protocol. In this work, we aim at contributing to the understanding of the 2-Choices dynamics by considering its behavior on a class of networks with core-periphery structure, a well-known topological assumption in social networks. In a nutshell, assume that a densely-connected subset of agents, the core, holds a different opinion from the rest of the network, the periphery. Then, depending on the strength of the cut between the core and the periphery, a phase-transition phenomenon occurs: Either the core's opinion rapidly spreads among the rest of the network, or a metastability phase takes place, in which both opinions coexist in the network for superpolynomial time. The interest of our result is twofold. On the one hand, by looking at the 2-Choices dynamics as a simplistic model of competition among opinions in social networks, our theorem sheds light on the influence of the core on the rest of the network, as a function of the core's connectivity towards the latter. On the other hand, to the best of our knowledge, we provide the first analytical result which shows a heterogeneous behavior of a simple dynamics as a function of structural parameters of the network. Finally, we validate our theoretical predictions with extensive experiments on real networks.}, }
Endnote
%0 Report %A Cruciani, Emilio %A Natale, Emanuele %A Nusser, André %A Scornavacca, Giacomo %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A446-6 %U http://arxiv.org/abs/1804.07223 %D 2018 %X Consider the following process on a network: Each agent initially holds either opinion blue or red; then, in each round, each agent looks at two random neighbors and, if the two have the same opinion, the agent adopts it. This process is known as the 2-Choices dynamics and is arguably the most basic non-trivial opinion dynamics modeling voting behavior on social networks. Despite its apparent simplicity, 2-Choices has been analytically characterized only on networks with a strong expansion property -- under assumptions on the initial configuration that establish it as a fast majority consensus protocol. In this work, we aim at contributing to the understanding of the 2-Choices dynamics by considering its behavior on a class of networks with core-periphery structure, a well-known topological assumption in social networks. In a nutshell, assume that a densely-connected subset of agents, the core, holds a different opinion from the rest of the network, the periphery. Then, depending on the strength of the cut between the core and the periphery, a phase-transition phenomenon occurs: Either the core's opinion rapidly spreads among the rest of the network, or a metastability phase takes place, in which both opinions coexist in the network for superpolynomial time. The interest of our result is twofold. On the one hand, by looking at the 2-Choices dynamics as a simplistic model of competition among opinions in social networks, our theorem sheds light on the influence of the core on the rest of the network, as a function of the core's connectivity towards the latter. On the other hand, to the best of our knowledge, we provide the first analytical result which shows a heterogeneous behavior of a simple dynamics as a function of structural parameters of the network. Finally, we validate our theoretical predictions with extensive experiments on real networks. %K cs.SI, Physics, Physics and Society, physics.soc-ph
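Entries [82], [84], and [85] all study the 2-Choices dynamics, whose update rule is spelled out verbatim in the abstract above. The following is a minimal synchronous simulation of that rule on a small hard-coded graph; it is an illustrative toy only, sampling the two neighbors independently with replacement (one common convention), and it does not reproduce the core-periphery construction or the phase-transition analysis of the papers.

```python
import random

# Minimal simulation of the 2-Choices dynamics described in the abstract above:
# in each round, every agent samples two random neighbors and adopts their
# opinion only if the two samples agree. Toy code on a small hard-coded graph;
# the core-periphery networks analyzed in the cited papers are not modeled.

random.seed(42)
graph = {                              # adjacency list of an illustrative graph
    0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
    4: [3, 5, 6], 5: [4, 6], 6: [4, 5],
}
opinion = {v: ("blue" if v <= 3 else "red") for v in graph}

def round_2choices(opinion):
    new = {}
    for v, nbrs in graph.items():
        a, b = random.choice(nbrs), random.choice(nbrs)   # two samples, with replacement
        new[v] = opinion[a] if opinion[a] == opinion[b] else opinion[v]
    return new

for _ in range(50):
    opinion = round_2choices(opinion)
blues = sum(1 for o in opinion.values() if o == "blue")
print(f"after 50 rounds: {blues} blue, {len(opinion) - blues} red")
```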
[83]
E. Cruciani, E. Natale, and G. Scornavacca, “On the Metastability of Quadratic Majority Dynamics on Clustered Graphs and its Biological Implications,” Bulletin of the EATCS, vol. 125, 2018.
Export
BibTeX
@article{Cruciani_EATCS2018b, TITLE = {On the Metastability of Quadratic Majority Dynamics on Clustered Graphs and its Biological Implications}, AUTHOR = {Cruciani, Emilio and Natale, Emanuele and Scornavacca, Giacomo}, LANGUAGE = {eng}, ISSN = {0252-9742}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Bulletin of the EATCS}, VOLUME = {125}, EID = {535}, }
Endnote
%0 Journal Article %A Cruciani, Emilio %A Natale, Emanuele %A Scornavacca, Giacomo %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On the Metastability of Quadratic Majority Dynamics on Clustered Graphs and its Biological Implications : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A94B-C %7 2018 %D 2018 %J Bulletin of the EATCS %O EATCS %V 125 %Z sequence number: 535 %@ false
[84]
E. Cruciani, E. Natale, A. Nusser, and G. Scornavacca, “Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks,” in AAMAS’18, 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden, 2018.
Export
BibTeX
@inproceedings{Cruciani_AAMAS2018, TITLE = {Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks}, AUTHOR = {Cruciani, Emilio and Natale, Emanuele and Nusser, Andr{\'e} and Scornavacca, Giacomo}, LANGUAGE = {eng}, ISBN = {978-1-4503-5649-7}, PUBLISHER = {ACM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {AAMAS'18, 17th International Conference on Autonomous Agents and MultiAgent Systems}, PAGES = {777--785}, ADDRESS = {Stockholm, Sweden}, }
Endnote
%0 Conference Proceedings %A Cruciani, Emilio %A Natale, Emanuele %A Nusser, André %A Scornavacca, Giacomo %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A47E-8 %D 2018 %B 17th International Conference on Autonomous Agents and MultiAgent Systems %Z date of event: 2018-07-10 - 2018-07-15 %C Stockholm, Sweden %B AAMAS'18 %P 777 - 785 %I ACM %@ 978-1-4503-5649-7
[85]
E. Cruciani, E. Natale, A. Nusser, and G. Scornavacca, “Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks,” Bulletin of the EATCS, vol. 125, 2018.
Export
BibTeX
@article{Cruciani_EATCS2018, TITLE = {Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks}, AUTHOR = {Cruciani, Emilio and Natale, Emanuele and Nusser, Andr{\'e} and Scornavacca, Giacomo}, LANGUAGE = {eng}, ISSN = {0252-9742}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Bulletin of the EATCS}, VOLUME = {125}, EID = {542}, }
Endnote
%0 Journal Article %A Cruciani, Emilio %A Natale, Emanuele %A Nusser, André %A Scornavacca, Giacomo %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A48F-4 %7 2018 %D 2018 %J Bulletin of the EATCS %O EATCS %V 125 %Z sequence number: 542 %@ false
[86]
M. Cygan, S. Kratsch, and J. Nederlof, “Fast Hamiltonicity Checking Via Bases of Perfect Matchings,” Journal of the ACM, vol. 65, no. 3, 2018.
Export
BibTeX
@article{Cygan2018, TITLE = {Fast {Hamiltonicity} Checking Via Bases of Perfect Matchings}, AUTHOR = {Cygan, Marek and Kratsch, Stefan and Nederlof, Jesper}, LANGUAGE = {eng}, ISSN = {0004-5411}, DOI = {10.1145/3148227}, PUBLISHER = {Association for Computing Machinery, Inc.}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Journal of the ACM}, VOLUME = {65}, NUMBER = {3}, EID = {12}, }
Endnote
%0 Journal Article %A Cygan, Marek %A Kratsch, Stefan %A Nederlof, Jesper %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Fast Hamiltonicity Checking Via Bases of Perfect Matchings : %G eng %U http://hdl.handle.net/21.11116/0000-0001-7AE5-4 %R 10.1145/3148227 %7 2018 %D 2018 %J Journal of the ACM %V 65 %N 3 %Z sequence number: 12 %I Association for Computing Machinery, Inc. %C New York, NY %@ false
[87]
R. David, C. S. Karthik, and B. Laekhanukit, “On the Complexity of Closest Pair via Polar-Pair of Point-Sets,” in 34th International Symposium on Computational Geometry (SoCG 2018), Budapest, Hungary, 2018.
Export
BibTeX
@inproceedings{David_SoCG2018, TITLE = {On the Complexity of Closest Pair via Polar-Pair of Point-Sets}, AUTHOR = {David, Roee and Karthik, C. S. and Laekhanukit, Bundit}, LANGUAGE = {eng}, ISBN = {978-3-95977-066-8}, URL = {urn:nbn:de:0030-drops-87412}, DOI = {10.4230/LIPIcs.SoCG.2018.28}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {34th International Symposium on Computational Geometry (SoCG 2018)}, EDITOR = {Speckmann, Bettina and T{\'o}th, Csaba D.}, PAGES = {1--15}, EID = {28}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {99}, ADDRESS = {Budapest, Hungary}, }
Endnote
%0 Conference Proceedings %A David, Roee %A Karthik, C. S. %A Laekhanukit, Bundit %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Complexity of Closest Pair via Polar-Pair of Point-Sets : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A827-5 %R 10.4230/LIPIcs.SoCG.2018.28 %U urn:nbn:de:0030-drops-87412 %D 2018 %B 34th International Symposium on Computational Geometry %Z date of event: 2018-06-11 - 2018-06-14 %C Budapest, Hungary %B 34th International Symposium on Computational Geometry %E Speckmann, Bettina; Tóth, Csaba D. %P 1 - 15 %Z sequence number: 28 %I Schloss Dagstuhl %@ 978-3-95977-066-8 %B Leibniz International Proceedings in Informatics %N 99 %U http://drops.dagstuhl.de/opus/volltexte/2018/8741/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[88]
L. Duraj, M. Künnemann, and A. Polak, “Tight Conditional Lower Bounds for Longest Common Increasing Subsequence,” Algorithmica, 2018.
Export
BibTeX
@article{Duraj2018, TITLE = {Tight Conditional Lower Bounds for Longest Common Increasing Subsequence}, AUTHOR = {Duraj, Lech and K{\"u}nnemann, Marvin and Polak, Adam}, LANGUAGE = {eng}, ISSN = {0178-4617}, DOI = {10.1007/s00453-018-0485-7}, PUBLISHER = {Springer-Verlag}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, JOURNAL = {Algorithmica}, }
Endnote
%0 Journal Article %A Duraj, Lech %A Künnemann, Marvin %A Polak, Adam %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Tight Conditional Lower Bounds for Longest Common Increasing Subsequence : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A906-9 %R 10.1007/s00453-018-0485-7 %7 2018 %D 2018 %J Algorithmica %I Springer-Verlag %C New York, NY %@ false
[89]
K. Fleszar, M. Mnich, and J. Spoerhase, “New Algorithms for Maximum Disjoint Paths Based on Tree-likeness,” Mathematical Programming / A, vol. 171, no. 1–2, 2018.
Export
BibTeX
@article{edge-disjoint-paths-mapr-17, TITLE = {New Algorithms for Maximum Disjoint Paths Based on Tree-likeness}, AUTHOR = {Fleszar, Krzysztof and Mnich, Matthias and Spoerhase, Joachim}, LANGUAGE = {eng}, ISSN = {0025-5610}, DOI = {10.1007/s10107-017-1199-3}, PUBLISHER = {North-Holland}, ADDRESS = {Heidelberg}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Mathematical Programming / A}, VOLUME = {171}, NUMBER = {1-2}, PAGES = {433--461}, }
Endnote
%0 Journal Article %A Fleszar, Krzysztof %A Mnich, Matthias %A Spoerhase, Joachim %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T New Algorithms for Maximum Disjoint Paths Based on Tree-likeness : %G eng %U http://hdl.handle.net/21.11116/0000-0000-B54C-F %R 10.1007/s10107-017-1199-3 %7 2017 %D 2018 %J Mathematical Programming / A %V 171 %N 1-2 %& 433 %P 433 - 461 %I North-Holland %C Heidelberg %@ false
[90]
P. Fraigniaud and E. Natale, “Noisy Rumor Spreading and Plurality Consensus,” Distributed Computing, vol. First Online, 2018.
Export
BibTeX
@article{Fraigniaud2018, TITLE = {Noisy Rumor Spreading and Plurality Consensus}, AUTHOR = {Fraigniaud, Pierre and Natale, Emanuele}, LANGUAGE = {eng}, ISSN = {0178-2770}, DOI = {10.1007/s00446-018-0335-5}, PUBLISHER = {Springer International}, ADDRESS = {Berlin}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, JOURNAL = {Distributed Computing}, VOLUME = {First Online}, }
Endnote
%0 Journal Article %A Fraigniaud, Pierre %A Natale, Emanuele %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Noisy Rumor Spreading and Plurality Consensus : %G eng %U http://hdl.handle.net/21.11116/0000-0002-6CD7-3 %R 10.1007/s00446-018-0335-5 %7 2018 %D 2018 %J Distributed Computing %V First Online %I Springer International %C Berlin %@ false
[91]
S. Friedrichs, M. Függer, and C. Lenzen, “Metastability-Containing Circuits,” IEEE Transactions on Computers, vol. 67, no. 8, 2018.
Abstract
Communication across unsynchronized clock domains is inherently vulnerable to metastable upsets; no digital circuit can deterministically avoid, resolve, or detect metastability (Marino, 1981). Traditionally, a possibly metastable input is stored in synchronizers, decreasing the odds of maintained metastability over time. This approach costs time, and does not guarantee success. We propose a fundamentally different approach: It is possible to \emph{contain} metastability by logical masking, so that it cannot infect the entire circuit. This technique guarantees a limited degree of metastability in---and uncertainty about---the output. We present a synchronizer-free, fault-tolerant clock synchronization algorithm as application, synchronizing clock domains and thus enabling metastability-free communication. At the heart of our approach lies a model for metastability in synchronous clocked digital circuits. Metastability is propagated in a worst-case fashion, allowing to derive deterministic guarantees, without and unlike synchronizers. The proposed model permits positive results while at the same time reproducing established impossibility results regarding avoidance, resolution, and detection of metastability. Furthermore, we fully classify which functions can be computed by synchronous circuits with standard registers, and show that masking registers are computationally strictly more powerful.
Export
BibTeX
@article{Friedrichs_Fuegger_Lenzen2018, TITLE = {Metastability-Containing Circuits}, AUTHOR = {Friedrichs, Stephan and F{\"u}gger, Matthias and Lenzen, Christoph}, LANGUAGE = {eng}, ISSN = {0018-9340}, DOI = {10.1109/TC.2018.2808185}, PUBLISHER = {IEEE}, ADDRESS = {Piscataway, NJ}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, ABSTRACT = {Communication across unsynchronized clock domains is inherently vulnerable to metastable upsets; no digital circuit can deterministically avoid, resolve, or detect metastability (Marino, 1981). Traditionally, a possibly metastable input is stored in synchronizers, decreasing the odds of maintained metastability over time. This approach costs time, and does not guarantee success. We propose a fundamentally different approach: It is possible to \emph{contain} metastability by logical masking, so that it cannot infect the entire circuit. This technique guarantees a limited degree of metastability in---and uncertainty about---the output. We present a synchronizer-free, fault-tolerant clock synchronization algorithm as application, synchronizing clock domains and thus enabling metastability-free communication. At the heart of our approach lies a model for metastability in synchronous clocked digital circuits. Metastability is propagated in a worst-case fashion, allowing to derive deterministic guarantees, without and unlike synchronizers. The proposed model permits positive results while at the same time reproducing established impossibility results regarding avoidance, resolution, and detection of metastability. Furthermore, we fully classify which functions can be computed by synchronous circuits with standard registers, and show that masking registers are computationally strictly more powerful.}, JOURNAL = {IEEE Transactions on Computers}, VOLUME = {67}, NUMBER = {8}, PAGES = {1167--1183}, }
Endnote
%0 Journal Article %A Friedrichs, Stephan %A Függer, Matthias %A Lenzen, Christoph %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Metastability-Containing Circuits : %G eng %U http://hdl.handle.net/21.11116/0000-0001-E5A0-7 %R 10.1109/TC.2018.2808185 %7 2018 %D 2018 %X Communication across unsynchronized clock domains is inherently vulnerable to metastable upsets; no digital circuit can deterministically avoid, resolve, or detect metastability (Marino, 1981). Traditionally, a possibly metastable input is stored in synchronizers, decreasing the odds of maintained metastability over time. This approach costs time, and does not guarantee success. We propose a fundamentally different approach: It is possible to \emph{contain} metastability by logical masking, so that it cannot infect the entire circuit. This technique guarantees a limited degree of metastability in---and uncertainty about---the output. We present a synchronizer-free, fault-tolerant clock synchronization algorithm as application, synchronizing clock domains and thus enabling metastability-free communication. At the heart of our approach lies a model for metastability in synchronous clocked digital circuits. Metastability is propagated in a worst-case fashion, allowing to derive deterministic guarantees, without and unlike synchronizers. The proposed model permits positive results while at the same time reproducing established impossibility results regarding avoidance, resolution, and detection of metastability. Furthermore, we fully classify which functions can be computed by synchronous circuits with standard registers, and show that masking registers are computationally strictly more powerful. %K Computer Science, Distributed, Parallel, and Cluster Computing, cs.DC %J IEEE Transactions on Computers %V 67 %N 8 %& 1167 %P 1167 - 1183 %I IEEE %C Piscataway, NJ %@ false
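The abstract of entry [91] describes containing metastability by logical masking. As an informal illustration only (not the paper's circuit model or its masking registers), the sketch below uses a Kleene-style three-valued logic with M standing for a possibly metastable signal: a gate whose stable inputs already force its output masks an M, and a multiplexer written with the redundant consensus term keeps a metastable select signal from reaching the output whenever the two data inputs agree.

```python
# Toy three-valued logic (0, 1, and "M" for "possibly metastable") illustrating
# logical masking in the spirit of the abstract above: a gate whose stable
# inputs already force the output masks a metastable input. Kleene-style
# illustration only, not the cited paper's circuit model or registers.

def AND(a, b):
    if a == 0 or b == 0:          # a stable 0 masks anything, even M
        return 0
    if a == 1 and b == 1:
        return 1
    return "M"

def OR(a, b):
    if a == 1 or b == 1:          # a stable 1 masks anything, even M
        return 1
    if a == 0 and b == 0:
        return 0
    return "M"

def NOT(a):
    return {0: 1, 1: 0}.get(a, "M")

def MUX(sel, x, y):
    # hazard-free formulation with the redundant consensus term AND(x, y):
    # when x == y the output is forced regardless of a metastable 'sel'
    return OR(OR(AND(NOT(sel), x), AND(sel, y)), AND(x, y))

print(AND(0, "M"), OR(1, "M"), AND(1, "M"))  # -> 0 1 M : masking vs. propagation
print(MUX("M", 1, 1))                        # -> 1: metastability on 'sel' is contained
print(MUX("M", 0, 1))                        # -> M: output genuinely depends on 'sel'
```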
[92]
S. Friedrichs and C. Lenzen, “Parallel Metric Tree Embedding based on an Algebraic View on Moore-Bellman-Ford,” Journal of the ACM, vol. 65, no. 6, 2018.
Export
BibTeX
@article{FriedrichsJACM2018, TITLE = {Parallel Metric Tree Embedding based on an Algebraic View on {Moore}-{Bellman}-{Ford}}, AUTHOR = {Friedrichs, Stephan and Lenzen, Christoph}, LANGUAGE = {eng}, ISSN = {0004-5411}, DOI = {10.1145/3231591}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Journal of the ACM}, VOLUME = {65}, NUMBER = {6}, EID = {43}, }
Endnote
%0 Journal Article %A Friedrichs, Stephan %A Lenzen, Christoph %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Parallel Metric Tree Embedding based on an Algebraic View on Moore-Bellman-Ford : %G eng %U http://hdl.handle.net/21.11116/0000-0002-8892-F %R 10.1145/3231591 %7 2018 %D 2018 %J Journal of the ACM %V 65 %N 6 %Z sequence number: 43 %I ACM %C New York, NY %@ false
[93]
M. Függer, A. Kinali, C. Lenzen, and B. Wiederhake, “Fast All-Digital Clock Frequency Adaptation Circuit for Voltage Droop Tolerance,” in 24th IEEE International Symposium on Asynchronous Circuits and Systems, Vienna, Austria. (Accepted/in press)
Export
BibTeX
@inproceedings{Fuegger_ASYNC2018, TITLE = {Fast All-Digital Clock Frequency Adaptation Circuit for Voltage Droop Tolerance}, AUTHOR = {F{\"u}gger, Matthias and Kinali, Attila and Lenzen, Christoph and Wiederhake, Ben}, LANGUAGE = {eng}, PUBLISHER = {IEEE}, YEAR = {2018}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {24th IEEE International Symposium on Asynchronous Circuits and Systems}, ADDRESS = {Vienna, Austria}, }
Endnote
%0 Conference Proceedings %A Függer, Matthias %A Kinali, Attila %A Lenzen, Christoph %A Wiederhake, Ben %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fast All-Digital Clock Frequency Adaptation Circuit for Voltage Droop Tolerance : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9FA4-2 %D 2018 %B 24th IEEE International Symposium on Asynchronous Circuits and Systems %Z date of event: 2018-05-13 - 2018-05-16 %C Vienna, Austria %B 24th IEEE International Symposium on Asynchronous Circuits and Systems %I IEEE
[94]
J. Garg, M. Hoefer, and K. Mehlhorn, “Approximating the Nash Social Welfare with Budget-Additive Valuations,” in SODA’18, Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 2018.
Export
BibTeX
@inproceedings{GargHoeferMehlhornSODA18, TITLE = {Approximating the {Nash} Social Welfare with Budget-Additive Valuations}, AUTHOR = {Garg, Jugal and Hoefer, Martin and Mehlhorn, Kurt}, LANGUAGE = {eng}, ISBN = {978-1-61197-503-1}, DOI = {10.1137/1.9781611975031.150}, PUBLISHER = {SIAM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {SODA'18, Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms}, EDITOR = {Czumaj, Artur}, PAGES = {2326--2340}, ADDRESS = {New Orleans, LA, USA}, }
Endnote
%0 Conference Proceedings %A Garg, Jugal %A Hoefer, Martin %A Mehlhorn, Kurt %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Approximating the Nash Social Welfare with Budget-Additive Valuations : %G eng %U http://hdl.handle.net/21.11116/0000-0000-37F9-A %R 10.1137/1.9781611975031.150 %D 2018 %B Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2018-01-07 - 2018-01-10 %C New Orleans, LA, USA %B SODA'18 %E Czumaj, Artur %P 2326 - 2340 %I SIAM %@ 978-1-61197-503-1
[95]
M. Ghaffari, A. Karrenbauer, F. Kuhn, C. Lenzen, and B. Patt-Shamir, “Near-Optimal Distributed Maximum Flow,” SIAM Journal on Computing, vol. 47, no. 6, 2018.
Export
BibTeX
@article{GKKLP2018, TITLE = {Near-Optimal Distributed Maximum Flow}, AUTHOR = {Ghaffari, Mohsen and Karrenbauer, Andreas and Kuhn, Fabian and Lenzen, Christoph and Patt-Shamir, Boaz}, LANGUAGE = {eng}, ISSN = {0097-5397}, DOI = {10.1137/17M113277X}, PUBLISHER = {Society for Industrial and Applied Mathematics.}, ADDRESS = {Philadelphia, PA}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {SIAM Journal on Computing}, VOLUME = {47}, NUMBER = {6}, PAGES = {2078--2117}, }
Endnote
%0 Journal Article %A Ghaffari, Mohsen %A Karrenbauer, Andreas %A Kuhn, Fabian %A Lenzen, Christoph %A Patt-Shamir, Boaz %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Near-Optimal Distributed Maximum Flow : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A3A9-7 %R 10.1137/17M113277X %7 2018 %D 2018 %J SIAM Journal on Computing %V 47 %N 6 %& 2078 %P 2078 - 2117 %I Society for Industrial and Applied Mathematics. %C Philadelphia, PA %@ false
[96]
T. A. G. Hageman, P. A. Loethman, M. Dirnberger, M. C. Elwenspoek, A. Manz, and L. Abelmann, “Macroscopic Equivalence for Microscopic Motion in a Turbulence Driven Three-dimensional Self-assembly Reactor,” Journal of Applied Physics, vol. 123, no. 2, 2018.
Export
BibTeX
@article{Hageman2018, TITLE = {Macroscopic Equivalence for Microscopic Motion in a Turbulence Driven Three-dimensional Self-assembly Reactor}, AUTHOR = {Hageman, T. A. G. and Loethman, P. A. and Dirnberger, Michael and Elwenspoek, M. C. and Manz, A. and Abelmann, L.}, LANGUAGE = {eng}, ISSN = {0021-8979}, DOI = {10.1063/1.5007029}, PUBLISHER = {AIP Publishing}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Journal of Applied Physics}, VOLUME = {123}, NUMBER = {2}, PAGES = {1--10}, EID = {024901}, }
Endnote
%0 Journal Article %A Hageman, T. A. G. %A Loethman, P. A. %A Dirnberger, Michael %A Elwenspoek, M. C. %A Manz, A. %A Abelmann, L. %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Macroscopic Equivalence for Microscopic Motion in a Turbulence Driven Three-dimensional Self-assembly Reactor : %G eng %U http://hdl.handle.net/21.11116/0000-0000-431A-8 %R 10.1063/1.5007029 %7 2018 %D 2018 %J Journal of Applied Physics %O J. Appl. Phys. %V 123 %N 2 %& 1 %P 1 - 10 %Z sequence number: 024901 %I AIP Publishing %C New York, NY %@ false
[97]
P. Heggernes, D. Issac, J. Lauri, P. T. Lima, and E. J. van Leeuwen, “Rainbow Vertex Coloring Bipartite Graphs and Chordal Graphs,” in 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018), Liverpool, UK, 2018.
Export
BibTeX
@inproceedings{heggernes_et_al-2018-rainbow-vertex, TITLE = {Rainbow Vertex Coloring Bipartite Graphs and Chordal Graphs}, AUTHOR = {Heggernes, Pinar and Issac, Davis and Lauri, Juho and Lima, Paloma T. and van Leeuwen, Erik Jan}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-086-6}, URL = {urn:nbn:de:0030-drops-96657}, DOI = {10.4230/LIPIcs.MFCS.2018.83}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)}, EDITOR = {Potapov, Igor and Spirakis, Paul and Worrell, James}, PAGES = {1--13}, EID = {83}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {117}, ADDRESS = {Liverpool, UK}, }
Endnote
%0 Conference Proceedings %A Heggernes, Pinar %A Issac, Davis %A Lauri, Juho %A Lima, Paloma T. %A van Leeuwen, Erik Jan %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Rainbow Vertex Coloring Bipartite Graphs and Chordal Graphs : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9E4D-7 %U urn:nbn:de:0030-drops-96657 %R 10.4230/LIPIcs.MFCS.2018.83 %D 2018 %B 43rd International Symposium on Mathematical Foundations of Computer Science %Z date of event: 2018-08-27 - 2018-08-31 %C Liverpool, UK %B 43rd International Symposium on Mathematical Foundations of Computer Science %E Potapov, Igor; Spirakis, Paul; Worrell, James %P 1 - 13 %Z sequence number: 83 %I Schloss Dagstuhl %@ 978-3-95977-086-6 %B Leibniz International Proceedings in Informatics %N 117 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/9665/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[98]
S. Heydrich, “A Tale of Two Packing Problems: Improved Algorithms and Tighter Bounds for Online Bin Packing and the Geometric Knapsack Problem,” Universität des Saarlandes, Saarbrücken, 2018.
Abstract
In this thesis, we deal with two packing problems: the online bin packing and the geometric knapsack problem. In online bin packing, the aim is to pack a given number of items of different size into a minimal number of containers. The items need to be packed one by one without knowing future items. For online bin packing in one dimension, we present a new family of algorithms that constitutes the first improvement over the previously best algorithm in almost 15 years. While the algorithmic ideas are intuitive, an elaborate analysis is required to prove its competitive ratio. We also give a lower bound for the competitive ratio of this family of algorithms. For online bin packing in higher dimensions, we discuss lower bounds for the competitive ratio and show that the ideas from the one-dimensional case cannot be easily transferred to obtain better two-dimensional algorithms. In the geometric knapsack problem, one aims to pack a maximum weight subset of given rectangles into one square container. For this problem, we consider offline approximation algorithms. For geometric knapsack with square items, we improve the running time of the best known PTAS and obtain an EPTAS. This shows that large running times caused by some standard techniques for geometric packing problems are not always necessary and can be improved. Finally, we show how to use resource augmentation to compute optimal solutions in EPTAS-time, thereby improving upon the known PTAS for this case.
Export
BibTeX
@phdthesis{Heydrphd18, TITLE = {A Tale of Two Packing Problems: Improved Algorithms and Tighter Bounds for Online Bin Packing and the Geometric Knapsack Problem}, AUTHOR = {Heydrich, Sandy}, LANGUAGE = {eng}, DOI = {10.22028/D291-27240}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, ABSTRACT = {In this thesis, we deal with two packing problems: the online bin packing and the geometric knapsack problem. In online bin packing, the aim is to pack a given number of items of different size into a minimal number of containers. The items need to be packed one by one without knowing future items. For online bin packing in one dimension, we present a new family of algorithms that constitutes the first improvement over the previously best algorithm in almost 15 years. While the algorithmic ideas are intuitive, an elaborate analysis is required to prove its competitive ratio. We also give a lower bound for the competitive ratio of this family of algorithms. For online bin packing in higher dimensions, we discuss lower bounds for the competitive ratio and show that the ideas from the one-dimensional case cannot be easily transferred to obtain better two-dimensional algorithms. In the geometric knapsack problem, one aims to pack a maximum weight subset of given rectangles into one square container. For this problem, we consider offline approximation algorithms. For geometric knapsack with square items, we improve the running time of the best known PTAS and obtain an EPTAS. This shows that large running times caused by some standard techniques for geometric packing problems are not always necessary and can be improved. Finally, we show how to use resource augmentation to compute optimal solutions in EPTAS-time, thereby improving upon the known PTAS for this case.}, }
Endnote
%0 Thesis %A Heydrich, Sandy %Y van Stee, Rob %A referee: Mehlhorn, Kurt %A referee: Grandoni, Fabrizio %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Discrete Optimization, MPI for Informatics, Max Planck Society %T A Tale of Two Packing Problems: Improved Algorithms and Tighter Bounds for Online Bin Packing and the Geometric Knapsack Problem : %G eng %U http://hdl.handle.net/21.11116/0000-0001-E3DC-7 %R 10.22028/D291-27240 %I Universität des Saarlandes %C Saarbrücken %D 2018 %P viii, 161 p. %V phd %9 phd %X In this thesis, we deal with two packing problems: the online bin packing and the geometric knapsack problem. In online bin packing, the aim is to pack a given number of items of different size into a minimal number of containers. The items need to be packed one by one without knowing future items. For online bin packing in one dimension, we present a new family of algorithms that constitutes the first improvement over the previously best algorithm in almost 15 years. While the algorithmic ideas are intuitive, an elaborate analysis is required to prove its competitive ratio. We also give a lower bound for the competitive ratio of this family of algorithms. For online bin packing in higher dimensions, we discuss lower bounds for the competitive ratio and show that the ideas from the one-dimensional case cannot be easily transferred to obtain better two-dimensional algorithms. In the geometric knapsack problem, one aims to pack a maximum weight subset of given rectangles into one square container. For this problem, we consider offline approximation algorithms. For geometric knapsack with square items, we improve the running time of the best known PTAS and obtain an EPTAS. This shows that large running times caused by some standard techniques for geometric packing problems are not always necessary and can be improved. Finally, we show how to use resource augmentation to compute optimal solutions in EPTAS-time, thereby improving upon the known PTAS for this case. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/27141
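As background for the online bin packing setting described in the thesis abstract above (items arrive one by one and must be placed irrevocably, without knowledge of future items), here is a minimal sketch of the classical First Fit heuristic; it is a textbook baseline for that setting, not the new family of algorithms developed in the thesis, and the item sizes are invented.

```python
# Minimal sketch of the classical First Fit heuristic for online bin packing:
# each arriving item (a size in (0, 1]) is placed into the first open bin that
# still has room, opening a new bin only if none fits. Textbook baseline for
# the setting described in the abstract above, not the thesis's algorithms.

def first_fit(items, capacity=1.0):
    bins = []                              # remaining free space per open bin
    for size in items:
        for i, free in enumerate(bins):
            if size <= free + 1e-12:       # small tolerance for float sizes
                bins[i] = free - size
                break
        else:
            bins.append(capacity - size)   # open a new bin
    return len(bins)

print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]))  # -> 4 bins
```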
[99]
M. Hoefer, D. Vaz, and L. Wagner, “Dynamics in Matching and Coalition Formation Games with Structural Constraints,” Artificial Intelligence, vol. 262, 2018.
Export
BibTeX
@article{Hoefer_2018, TITLE = {Dynamics in Matching and Coalition Formation Games with Structural Constraints}, AUTHOR = {Hoefer, Martin and Vaz, Daniel and Wagner, Lisa}, LANGUAGE = {eng}, ISSN = {0004-3702}, DOI = {10.1016/j.artint.2018.06.004}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Artificial Intelligence}, VOLUME = {262}, PAGES = {222--247}, }
Endnote
%0 Journal Article %A Hoefer, Martin %A Vaz, Daniel %A Wagner, Lisa %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Dynamics in Matching and Coalition Formation Games with Structural Constraints : %G eng %U http://hdl.handle.net/21.11116/0000-0002-02F6-6 %R 10.1016/j.artint.2018.06.004 %7 2018 %D 2018 %J Artificial Intelligence %V 262 %& 222 %P 222 - 247 %I Elsevier %C Amsterdam %@ false
[100]
W. Höhn, J. Mestre, and A. Wiese, “How Unsplittable-flow-covering Helps Scheduling with Job-dependent Cost Functions,” Algorithmica, vol. 80, no. 4, 2018.
Export
BibTeX
@article{Hoehn2017, TITLE = {How Unsplittable-flow-covering Helps Scheduling with Job-dependent Cost Functions}, AUTHOR = {H{\"o}hn, Wiebke and Mestre, Julian and Wiese, Andreas}, LANGUAGE = {eng}, ISSN = {0178-4617}, DOI = {10.1007/s00453-017-0300-x}, PUBLISHER = {Springer-Verlag}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Algorithmica}, VOLUME = {80}, NUMBER = {4}, PAGES = {1191--1213}, }
Endnote
%0 Journal Article %A Höhn, Wiebke %A Mestre, Julian %A Wiese, Andreas %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T How Unsplittable-flow-covering Helps Scheduling with Job-dependent Cost Functions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-2618-3 %R 10.1007/s00453-017-0300-x %7 2017 %D 2018 %J Algorithmica %V 80 %N 4 %& 1191 %P 1191 - 1213 %I Springer-Verlag %C New York, NY %@ false
[101]
C. Ikenmeyer, B. Komarath, C. Lenzen, V. Lysikov, A. Mokhov, and K. Sreenivasaiah, “On the Complexity of Hazard-free Circuits,” in STOC’18, 50th Annual ACM SIGACT Symposium on Theory of Computing, Los Angeles, CA, USA, 2018.
Export
BibTeX
@inproceedings{Ikenmeyer_STOC2018, TITLE = {On the Complexity of Hazard-free Circuits}, AUTHOR = {Ikenmeyer, Christian and Komarath, Balagopal and Lenzen, Christoph and Lysikov, Vladimir and Mokhov, Andrey and Sreenivasaiah, Karteek}, LANGUAGE = {eng}, ISBN = {978-1-4503-5559-9}, DOI = {10.1145/3188745.3188912}, PUBLISHER = {ACM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {STOC'18, 50th Annual ACM SIGACT Symposium on Theory of Computing}, PAGES = {878--889}, ADDRESS = {Los Angeles, CA, USA}, }
Endnote
%0 Conference Proceedings %A Ikenmeyer, Christian %A Komarath, Balagopal %A Lenzen, Christoph %A Lysikov, Vladimir %A Mokhov, Andrey %A Sreenivasaiah, Karteek %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T On the Complexity of Hazard-free Circuits : %G eng %U http://hdl.handle.net/21.11116/0000-0002-17E1-6 %R 10.1145/3188745.3188912 %D 2018 %B 50th Annual ACM SIGACT Symposium on Theory of Computing %Z date of event: 2018-06-25 - 2018-06-29 %C Los Angeles, CA, USA %B STOC'18 %P 878 - 889 %I ACM %@ 978-1-4503-5559-9
[102]
C. Ikenmeyer and S. Mengel, “On the Relative Power of Reduction Notions in Arithmetic Circuit Complexity,” Information Processing Letters, vol. 130, 2018.
Export
BibTeX
@article{Ikenmeyer2018, TITLE = {On the Relative Power of Reduction Notions in Arithmetic Circuit Complexity}, AUTHOR = {Ikenmeyer, Christian and Mengel, Stefan}, LANGUAGE = {eng}, ISSN = {0020-0190}, DOI = {10.1016/j.ipl.2017.09.009}, PUBLISHER = {Elsevier}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Information Processing Letters}, VOLUME = {130}, PAGES = {7--10}, }
Endnote
%0 Journal Article %A Ikenmeyer, Christian %A Mengel, Stefan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On the Relative Power of Reduction Notions in Arithmetic Circuit Complexity : %G eng %U http://hdl.handle.net/21.11116/0000-0000-0361-F %R 10.1016/j.ipl.2017.09.009 %7 2017 %D 2018 %J Information Processing Letters %V 130 %& 7 %P 7 - 10 %I Elsevier %@ false
[103]
C. S. Karthik, B. Laekhanukit, and P. Manurangsi, “On the Parameterized Complexity of Approximating Dominating Set,” in STOC’18, 50th Annual ACM SIGACT Symposium on Theory of Computing, Los Angeles, CA, USA, 2018.
Export
BibTeX
@inproceedings{Karthik_STOC2018, TITLE = {On the Parameterized Complexity of Approximating Dominating Set}, AUTHOR = {Karthik, C. S. and Laekhanukit, Bundit and Manurangsi, Pasin}, LANGUAGE = {eng}, ISBN = {978-1-4503-5559-9}, DOI = {10.1145/3188745.3188896}, PUBLISHER = {ACM}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {STOC'18, 50th Annual ACM SIGACT Symposium on Theory of Computing}, PAGES = {1283--1296}, ADDRESS = {Los Angeles, CA, USA}, }
Endnote
%0 Conference Proceedings %A Karthik, C. S. %A Laekhanukit, Bundit %A Manurangsi, Pasin %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On the Parameterized Complexity of Approximating Dominating Set : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A81D-1 %R 10.1145/3188745.3188896 %D 2018 %B 50th Annual ACM SIGACT Symposium on Theory of Computing %Z date of event: 2018-06-25 - 2018-06-29 %C Los Angeles, CA, USA %B STOC'18 %P 1283 - 1296 %I ACM %@ 978-1-4503-5559-9
[104]
P. Khanchandani and C. Lenzen, “Self-Stabilizing Byzantine Clock Synchronization with Optimal Precision,” Theory of Computing Systems, 2018.
Export
BibTeX
@article{_Khanchandani2018, TITLE = {Self-Stabilizing {B}yzantine Clock Synchronization with Optimal Precision}, AUTHOR = {Khanchandani, Pankaj and Lenzen, Christoph}, LANGUAGE = {eng}, ISSN = {1432-4350}, DOI = {10.1007/s00224-017-9840-3}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, JOURNAL = {Theory of Computing Systems}, }
Endnote
%0 Journal Article %A Khanchandani, Pankaj %A Lenzen, Christoph %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Self-Stabilizing Byzantine Clock Synchronization with Optimal Precision : %G eng %U http://hdl.handle.net/21.11116/0000-0000-73AC-D %R 10.1007/s00224-017-9840-3 %7 2018-01-20 %D 2018 %8 20.01.2018 %J Theory of Computing Systems %I Springer %C New York, NY %@ false
[105]
A. Kinali, “A Physical Sine-to-Square Converter Noise Model,” in IEEE International Frequency Control Symposium (IFCS 2018), Olympic Valley, CA, USA. (Accepted/in press)
Export
BibTeX
@inproceedings{Kinali_IFCS2018, TITLE = {A Physical Sine-to-Square Converter Noise Model}, AUTHOR = {Kinali, Attila}, LANGUAGE = {eng}, YEAR = {2018}, PUBLREMARK = {Accepted}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {IEEE International Frequency Control Symposium (IFCS 2018)}, ADDRESS = {Olympic Valley, CA, USA}, }
Endnote
%0 Conference Proceedings %A Kinali, Attila %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Physical Sine-to-Square Converter Noise Model : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AC39-D %D 2018 %B IEEE International Frequency Control Symposium %Z date of event: 2018-05-21 - 2018-05-24 %C Olympic Valley, CA, USA %B IEEE International Frequency Control Symposium
[106]
P. Koprowski, K. Mehlhorn, and S. Ray, “Corrigendum to ‘Faster algorithms for computing Hong’s bound on absolute positiveness’ [J. Symb. Comput. 45 (2010) 677–683],” Journal of Symbolic Computation, vol. 87, 2018.
Export
BibTeX
@article{Koprowski2018, TITLE = {Corrigendum to {\textquotedblleft}Faster algorithms for computing Hong's bound on absolute positiveness{\textquotedblright} [J. Symb. Comput. 45 (2010) 677--683]}, AUTHOR = {Koprowski, Przemys{\l}aw and Mehlhorn, Kurt and Ray, Saurabh}, LANGUAGE = {eng}, ISSN = {0747-7171}, DOI = {10.1016/j.jsc.2017.05.008}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Journal of Symbolic Computation}, VOLUME = {87}, PAGES = {238--241}, }
Endnote
%0 Journal Article %A Koprowski, Przemysław %A Mehlhorn, Kurt %A Ray, Saurabh %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Corrigendum to “Faster algorithms for computing Hong's bound on absolute positiveness” [J. Symb. Comput. 45 (2010) 677–683] : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3C55-D %R 10.1016/j.jsc.2017.05.008 %7 2017 %D 2018 %J Journal of Symbolic Computation %V 87 %& 238 %P 238 - 241 %I Elsevier %C Amsterdam %@ false
[107]
M. Künnemann, “On Nondeterministic Derandomization of Freivalds’ Algorithm: Consequences, Avenues and Algorithmic Progress,” in 26th Annual European Symposium on Algorithms (ESA 2018), Helsinki, Finland, 2018.
Export
BibTeX
@inproceedings{Kuennemann_ESA2018, TITLE = {On Nondeterministic Derandomization of {F}reivalds' Algorithm: {C}onsequences, Avenues and Algorithmic Progress}, AUTHOR = {K{\"u}nnemann, Marvin}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-081-1}, URL = {urn:nbn:de:0030-drops-95195}, DOI = {10.4230/LIPIcs.ESA.2018.56}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {26th Annual European Symposium on Algorithms (ESA 2018)}, EDITOR = {Azar, Yossi and Bast, Hannah and Herman, Grzegorz}, PAGES = {1--16}, EID = {56}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {112}, ADDRESS = {Helsinki, Finland}, }
Endnote
%0 Conference Proceedings %A Künnemann, Marvin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A8F1-0 %R 10.4230/LIPIcs.ESA.2018.56 %U urn:nbn:de:0030-drops-95195 %D 2018 %B 26th Annual European Symposium on Algorithms %Z date of event: 2018-08-20 - 2018-08-22 %C Helsinki, Finland %B 26th Annual European Symposium on Algorithms %E Azar, Yossi; Bast, Hannah; Herman, Grzegorz %P 1 - 16 %Z sequence number: 56 %I Schloss Dagstuhl %@ 978-3-95977-081-1 %B Leibniz International Proceedings in Informatics %N 112 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/9519/
[108]
M. Künnemann, “On Nondeterministic Derandomization of Freivalds’ Algorithm: Consequences, Avenues and Algorithmic Progress,” 2018. [Online]. Available: http://arxiv.org/abs/1806.09189. (arXiv: 1806.09189)
Abstract
Motivated by studying the power of randomness, certifying algorithms and barriers for fine-grained reductions, we investigate the question whether the multiplication of two $n\times n$ matrices can be performed in near-optimal nondeterministic time $\tilde{O}(n^2)$. Since a classic algorithm due to Freivalds verifies correctness of matrix products probabilistically in time $O(n^2)$, our question is a relaxation of the open problem of derandomizing Freivalds' algorithm. We discuss consequences of a positive or negative resolution of this problem and provide potential avenues towards resolving it. Particularly, we show that sufficiently fast deterministic verifiers for 3SUM or univariate polynomial identity testing yield faster deterministic verifiers for matrix multiplication. Furthermore, we present the partial algorithmic progress that distinguishing whether an integer matrix product is correct or contains between 1 and $n$ erroneous entries can be performed in time $\tilde{O}(n^2)$ -- interestingly, the difficult case of deterministic matrix product verification is not a problem of "finding a needle in the haystack", but rather cancellation effects in the presence of many errors. Our main technical contribution is a deterministic algorithm that corrects an integer matrix product containing at most $t$ errors in time $\tilde{O}(\sqrt{t} n^2 + t^2)$. To obtain this result, we show how to compute an integer matrix product with at most $t$ nonzeroes in the same running time. This improves upon known deterministic output-sensitive integer matrix multiplication algorithms for $t = \Omega(n^{2/3})$ nonzeroes, which is of independent interest.
Export
BibTeX
@online{Kuennemann_arXiv1806.09189, TITLE = {On Nondeterministic Derandomization of {F}reivalds' Algorithm: {C}onsequences, Avenues and Algorithmic Progress}, AUTHOR = {K{\"u}nnemann, Marvin}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1806.09189}, EPRINT = {1806.09189}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Motivated by studying the power of randomness, certifying algorithms and barriers for fine-grained reductions, we investigate the question whether the multiplication of two $n\times n$ matrices can be performed in near-optimal nondeterministic time $\tilde{O}(n^2)$. Since a classic algorithm due to Freivalds verifies correctness of matrix products probabilistically in time $O(n^2)$, our question is a relaxation of the open problem of derandomizing Freivalds' algorithm. We discuss consequences of a positive or negative resolution of this problem and provide potential avenues towards resolving it. Particularly, we show that sufficiently fast deterministic verifiers for 3SUM or univariate polynomial identity testing yield faster deterministic verifiers for matrix multiplication. Furthermore, we present the partial algorithmic progress that distinguishing whether an integer matrix product is correct or contains between 1 and $n$ erroneous entries can be performed in time $\tilde{O}(n^2)$ -- interestingly, the difficult case of deterministic matrix product verification is not a problem of "finding a needle in the haystack", but rather cancellation effects in the presence of many errors. Our main technical contribution is a deterministic algorithm that corrects an integer matrix product containing at most $t$ errors in time $\tilde{O}(\sqrt{t} n^2 + t^2)$. To obtain this result, we show how to compute an integer matrix product with at most $t$ nonzeroes in the same running time. This improves upon known deterministic output-sensitive integer matrix multiplication algorithms for $t = \Omega(n^{2/3})$ nonzeroes, which is of independent interest.}, }
Endnote
%0 Report %A Künnemann, Marvin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A8F5-C %U http://arxiv.org/abs/1806.09189 %D 2018 %X Motivated by studying the power of randomness, certifying algorithms and barriers for fine-grained reductions, we investigate the question whether the multiplication of two $n\times n$ matrices can be performed in near-optimal nondeterministic time $\tilde{O}(n^2)$. Since a classic algorithm due to Freivalds verifies correctness of matrix products probabilistically in time $O(n^2)$, our question is a relaxation of the open problem of derandomizing Freivalds' algorithm. We discuss consequences of a positive or negative resolution of this problem and provide potential avenues towards resolving it. Particularly, we show that sufficiently fast deterministic verifiers for 3SUM or univariate polynomial identity testing yield faster deterministic verifiers for matrix multiplication. Furthermore, we present the partial algorithmic progress that distinguishing whether an integer matrix product is correct or contains between 1 and $n$ erroneous entries can be performed in time $\tilde{O}(n^2)$ -- interestingly, the difficult case of deterministic matrix product verification is not a problem of "finding a needle in the haystack", but rather cancellation effects in the presence of many errors. Our main technical contribution is a deterministic algorithm that corrects an integer matrix product containing at most $t$ errors in time $\tilde{O}(\sqrt{t} n^2 + t^2)$. To obtain this result, we show how to compute an integer matrix product with at most $t$ nonzeroes in the same running time. This improves upon known deterministic output-sensitive integer matrix multiplication algorithms for $t = \Omega(n^{2/3})$ nonzeroes, which is of independent interest. %K Computer Science, Data Structures and Algorithms, cs.DS,Computer Science, Computational Complexity, cs.CC
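The abstract above builds on Freivalds' classic randomized verifier for matrix products. Purely for reference (the paper concerns deterministic and nondeterministic verification, not this routine), a minimal Python/NumPy sketch of that probabilistic check follows; each trial uses only matrix-vector products, i.e. O(n^2) arithmetic operations, and the function name and test matrices are illustrative.
import numpy as np

def freivalds_check(A, B, C, trials=20, seed=0):
    """Return False if certainly A @ B != C; otherwise True with error probability <= 2**-trials."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)              # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):  # two matrix-vector products per side
            return False
    return True

A = np.array([[1, 2], [3, 4]]); B = np.array([[5, 6], [7, 8]])
print(freivalds_check(A, B, A @ B))      # True
print(freivalds_check(A, B, A @ B + 1))  # almost surely False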
[109]
A. Kurpisz, M. Mastrolilli, C. Mathieu, T. Mömke, V. Verdugo, and A. Wiese, “Semidefinite and Linear Programming Integrality Gaps for Scheduling Identical Machines,” Mathematical Programming / B, vol. 172, no. 1–2, 2018.
Export
BibTeX
@article{Kurpisz2018, TITLE = {Semidefinite and Linear Programming Integrality Gaps for Scheduling Identical Machines}, AUTHOR = {Kurpisz, Adam and Mastrolilli, Monaldo and Mathieu, Claire and M{\"o}mke, Tobias and Verdugo, Victor and Wiese, Andreas}, LANGUAGE = {eng}, ISSN = {0025-5610}, DOI = {10.1007/s10107-017-1152-5}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Mathematical Programming / B}, VOLUME = {172}, NUMBER = {1-2}, PAGES = {231--248}, }
Endnote
%0 Journal Article %A Kurpisz, Adam %A Mastrolilli, Monaldo %A Mathieu, Claire %A Mömke, Tobias %A Verdugo, Victor %A Wiese, Andreas %+ External Organizations External Organizations External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Semidefinite and Linear Programming Integrality Gaps for Scheduling Identical Machines : %G eng %U http://hdl.handle.net/21.11116/0000-0002-6BCF-E %R 10.1007/s10107-017-1152-5 %7 2017 %D 2018 %J Mathematical Programming / B %V 172 %N 1-2 %& 231 %P 231 - 248 %@ false
[110]
J.-H. Lange, A. Karrenbauer, and B. Andres, “Partial Optimality and Fast Lower Bounds for Weighted Correlation Clustering,” in Proceedings of the 35th International Conference on Machine Learning (ICML 2018), Stockholm, Sweden, 2018.
Export
BibTeX
@inproceedings{pmlr-v80-lange18a, TITLE = {Partial Optimality and Fast Lower Bounds for Weighted Correlation Clustering}, AUTHOR = {Lange, Jan-Hendrik and Karrenbauer, Andreas and Andres, Bjoern}, LANGUAGE = {eng}, ISSN = {1938-7228}, URL = {http://proceedings.mlr.press/v80/lange18a.html}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Proceedings of the 35th International Conference on Machine Learning (ICML 2018)}, EDITOR = {Dy, Jennifer and Krause, Andreas}, PAGES = {2898--2907}, SERIES = {Proceedings of Machine Learning Research}, VOLUME = {80}, ADDRESS = {Stockholm, Sweden}, }
Endnote
%0 Conference Proceedings %A Lange, Jan-Hendrik %A Karrenbauer, Andreas %A Andres, Bjoern %+ Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society %T Partial Optimality and Fast Lower Bounds for Weighted Correlation Clustering : %G eng %U http://hdl.handle.net/21.11116/0000-0001-A71C-4 %U http://proceedings.mlr.press/v80/lange18a.html %D 2018 %B 35th International Conference on Machine Learning %Z date of event: 2018-07-10 - 2018-07-15 %C Stockholm, Sweden %B Proceedings of the 35th International Conference on Machine Learning %E Dy, Jennifer; Krause, Andreas %P 2898 - 2907 %B Proceedings of Machine Learning Research %N 80 %@ false %U http://proceedings.mlr.press/v80/lange18a/lange18a.pdf
[111]
C. Lenzen and R. Levi, “A Centralized Local Algorithm for the Sparse Spanning Graph Problem,” in 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), Prague, Czech Republic, 2018.
Export
BibTeX
@inproceedings{Lenzen_ICALP2018, TITLE = {A Centralized Local Algorithm for the Sparse Spanning Graph Problem}, AUTHOR = {Lenzen, Christoph and Levi, Reut}, LANGUAGE = {eng}, ISBN = {978-3-95977-076-7}, URL = {urn:nbn:de:0030-drops-90919}, DOI = {10.4230/LIPIcs.ICALP.2018.87}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)}, EDITOR = {Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D{\'a}niel and Sannella, Donald}, PAGES = {1--47}, EID = {87}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {107}, ADDRESS = {Prague, Czech Republic}, }
Endnote
%0 Conference Proceedings %A Lenzen, Christoph %A Levi, Reut %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Centralized Local Algorithm for the Sparse Spanning Graph Problem : %G eng %U http://hdl.handle.net/21.11116/0000-0002-17EF-8 %R 10.4230/LIPIcs.ICALP.2018.87 %U urn:nbn:de:0030-drops-90919 %D 2018 %B 45th International Colloquium on Automata, Languages, and Programming %Z date of event: 2018-07-09 - 2018-07-13 %C Prague, Czech Republic %B 45th International Colloquium on Automata, Languages, and Programming %E Chatzigiannakis, Ioannis; Kaklamanis, Christos; Marx, Dániel; Sannella, Donald %P 1 - 47 %Z sequence number: 87 %I Schloss Dagstuhl %@ 978-3-95977-076-7 %B Leibniz International Proceedings in Informatics %N 107 %U http://drops.dagstuhl.de/opus/volltexte/2018/9091/
[112]
C. Lenzen, B. Patt-Shamir, and D. Peleg, “Distributed Distance Computation and Routing with Small Messages,” Distributed Computing, vol. First Online, 2018.
Export
BibTeX
@article{Lenzen_DC2018, TITLE = {Distributed Distance Computation and Routing with Small Messages}, AUTHOR = {Lenzen, Christoph and Patt-Shamir, Boaz and Peleg, David}, LANGUAGE = {eng}, ISSN = {0178-2770}, DOI = {10.1007/s00446-018-0326-6}, PUBLISHER = {Springer International}, ADDRESS = {Berlin}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, JOURNAL = {Distributed Computing}, VOLUME = {First Online}, }
Endnote
%0 Journal Article %A Lenzen, Christoph %A Patt-Shamir, Boaz %A Peleg, David %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Distributed Distance Computation and Routing with Small Messages : %G eng %U http://hdl.handle.net/21.11116/0000-0002-6CD1-9 %R 10.1007/s00446-018-0326-6 %7 2018 %D 2018 %J Distributed Computing %V First Online %I Springer International %C Berlin %@ false
[113]
E. Natale, “On the Computational Power of Simple Dynamics,” Bulletin of the EATCS, vol. 124, 2018.
Export
BibTeX
@article{Natale_EATCS2018, TITLE = {On the Computational Power of Simple Dynamics}, AUTHOR = {Natale, Emanuele}, LANGUAGE = {eng}, PUBLISHER = {EATCS}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, JOURNAL = {Bulletin of the EATCS}, VOLUME = {124}, EID = {526}, }
Endnote
%0 Journal Article %A Natale, Emanuele %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Computational Power of Simple Dynamics : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A93E-B %7 2018 %D 2018 %J Bulletin of the EATCS %V 124 %Z sequence number: 526 %I EATCS %U http://eatcs.org/beatcs/index.php/beatcs/article/view/526
[114]
E. Oh, “Point Location in Incremental Planar Subdivisions,” in 29th International Symposium on Algorithms and Computation (ISAAC 2018), Jiaoxi, Yilan, Taiwan, 2018.
Export
BibTeX
@inproceedings{Oh_ISAAC2018, TITLE = {Point Location in Incremental Planar Subdivisions}, AUTHOR = {Oh, Eunjin}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-094-1}, URL = {urn:nbn:de:0030-drops-99991}, DOI = {10.4230/LIPIcs.ISAAC.2018.51}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {29th International Symposium on Algorithms and Computation (ISAAC 2018)}, EDITOR = {Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou}, PAGES = {1--12}, EID = {51}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {123}, ADDRESS = {Jiaoxi, Yilan, Taiwan}, }
Endnote
%0 Conference Proceedings %A Oh, Eunjin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Point Location in Incremental Planar Subdivisions : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AA7B-5 %R 10.4230/LIPIcs.ISAAC.2018.51 %U urn:nbn:de:0030-drops-99991 %D 2018 %B 29th International Symposium on Algorithms and Computation %Z date of event: 2018-12-16 - 2018-12-19 %C Jiaoxi, Yilan, Taiwan %B 29th International Symposium on Algorithms and Computation %E Hsu, Wen-Lian; Lee, Der-Tsai; Liao, Chung-Shou %P 1 - 12 %Z sequence number: 51 %I Schloss Dagstuhl %@ 978-3-95977-094-1 %B Leibniz International Proceedings in Informatics %N 123 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/9999/
[115]
E. Oh and H.-K. Ahn, “Point Location in Dynamic Planar Subdivisions,” in 34th International Symposium on Computational Geometry (SoCG 2018), Budapest, Hungary, 2018.
Export
BibTeX
@inproceedings{Oh_SoCG2018b, TITLE = {Point Location in Dynamic Planar Subdivisions}, AUTHOR = {Oh, Eunjin and Ahn, Hee-Kap}, LANGUAGE = {eng}, ISBN = {978-3-95977-066-8}, URL = {urn:nbn:de:0030-drops-87769}, DOI = {10.4230/LIPIcs.SoCG.2018.63}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {34th International Symposium on Computational Geometry (SoCG 2018)}, EDITOR = {Speckmann, Bettina and T{\'o}th, Csaba D.}, PAGES = {1--14}, EID = {63}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {99}, ADDRESS = {Budapest, Hungary}, }
Endnote
%0 Conference Proceedings %A Oh, Eunjin %A Ahn, Hee-Kap %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Point Location in Dynamic Planar Subdivisions : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AA86-7 %R 10.4230/LIPIcs.SoCG.2018.63 %U urn:nbn:de:0030-drops-87769 %D 2018 %B 34th International Symposium on Computational Geometry %Z date of event: 2018-06-11 - 2018-06-14 %C Budapest, Hungary %B 34th International Symposium on Computational Geometry %E Speckmann, Bettina; Tóth, Csaba D. %P 1 - 14 %Z sequence number: 63 %I Schloss Dagstuhl %@ 978-3-95977-066-8 %B Leibniz International Proceedings in Informatics %N 99 %U http://drops.dagstuhl.de/opus/volltexte/2018/8776/
[116]
E. Oh, “Minimizing Distance-to-Sight in Polygonal Domains,” in 29th International Symposium on Algorithms and Computation (ISAAC 2018), Jiaoxi, Yilan, Taiwan, 2018.
Export
BibTeX
@inproceedings{Oh_ISAAC2018b, TITLE = {Minimizing Distance-to-Sight in Polygonal Domains}, AUTHOR = {Oh, Eunjin}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-094-1}, URL = {urn:nbn:de:0030-drops-100073}, DOI = {10.4230/LIPIcs.ISAAC.2018.59}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {29th International Symposium on Algorithms and Computation (ISAAC 2018)}, EDITOR = {Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou}, PAGES = {1--12}, EID = {59}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {123}, ADDRESS = {Jiaoxi, Yilan, Taiwan}, }
Endnote
%0 Conference Proceedings %A Oh, Eunjin %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Minimizing Distance-to-Sight in Polygonal Domains : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AA7D-3 %R 10.4230/LIPIcs.ISAAC.2018.59 %U urn:nbn:de:0030-drops-100073 %D 2018 %B 29th International Symposium on Algorithms and Computation %Z date of event: 2018-12-16 - 2018-12-19 %C Jiaoxi, Yilan, Taiwan %B 29th International Symposium on Algorithms and Computation %E Hsu, Wen-Lian; Lee, Der-Tsai; Liao, Chung-Shou %P 1 - 12 %Z sequence number: 59 %I Schloss Dagstuhl %@ 978-3-95977-094-1 %B Leibniz International Proceedings in Informatics %N 123 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/10007/
[117]
E. Oh and H.-K. Ahn, “Approximate Range Queries for Clustering,” in 34th International Symposium on Computational Geometry (SoCG 2018), Budapest, Hungary, 2018.
Export
BibTeX
@inproceedings{Oh_SoCG2018, TITLE = {Approximate Range Queries for Clustering}, AUTHOR = {Oh, Eunjin and Ahn, Hee-Kap}, LANGUAGE = {eng}, ISBN = {978-3-95977-066-8}, URL = {urn:nbn:de:0030-drops-87755}, DOI = {10.4230/LIPIcs.SoCG.2018.62}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {34th International Symposium on Computational Geometry (SoCG 2018)}, EDITOR = {Speckmann, Bettina and T{\'o}th, Csaba D.}, PAGES = {1--14}, EID = {62}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {99}, ADDRESS = {Budapest, Hungary}, }
Endnote
%0 Conference Proceedings %A Oh, Eunjin %A Ahn, Hee-Kap %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Approximate Range Queries for Clustering : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AA83-A %R 10.4230/LIPIcs.SoCG.2018.62 %U urn:nbn:de:0030-drops-87755 %D 2018 %B 34th International Symposium on Computational Geometry %Z date of event: 2018-06-11 - 2018-06-14 %C Budapest, Hungary %B 34th International Symposium on Computational Geometry %E Speckmann, Bettina; Tóth, Csaba D. %P 1 - 14 %Z sequence number: 62 %I Schloss Dagstuhl %@ 978-3-95977-066-8 %B Leibniz International Proceedings in Informatics %N 99 %U http://drops.dagstuhl.de/opus/volltexte/2018/8775/
[118]
A. Oulasvirta and A. Karrenbauer, “Combinatorial Optimization for UI Design,” in Computational Interaction, Oxford, UK: Oxford University Press, 2018.
Export
BibTeX
@incollection{Oulasvirta2018OUP, TITLE = {Combinatorial Optimization for {UI} Design}, AUTHOR = {Oulasvirta, Antti and Karrenbauer, Andreas}, LANGUAGE = {eng}, ISBN = {9780198799610}, PUBLISHER = {Oxford University Press}, ADDRESS = {Oxford, UK}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, BOOKTITLE = {Computational Interaction}, EDITOR = {Oulasvirta, Antti and Kristensson, Per Ola and Bi, Xiaojun and Howes, Andrew}, PAGES = {97--120}, }
Endnote
%0 Book Section %A Oulasvirta, Antti %A Karrenbauer, Andreas %+ Computer Graphics, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Combinatorial Optimization for UI Design : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AA65-D %D 2018 %B Computational Interaction %E Oulasvirta, Antti; Kristensson, Per Ola; Bi, Xiaojun; Howes, Andrew %P 97 - 120 %I Oxford University Press %C Oxford, UK %@ 9780198799610
[119]
B. Ray Chaudhury, Y. K. Cheung, J. Garg, N. Garg, M. Hoefer, and K. Mehlhorn, “On Fair Division of Indivisible Items,” 2018. [Online]. Available: http://arxiv.org/abs/1805.06232. (arXiv: 1805.06232)
Abstract
We consider the task of assigning indivisible goods to a set of agents in a fair manner. Our notion of fairness is Nash social welfare, i.e., the goal is to maximize the geometric mean of the utilities of the agents. Each good comes in multiple items or copies, and the utility of an agent diminishes as it receives more items of the same good. The utility of a bundle of items for an agent is the sum of the utilities of the items in the bundle. Each agent has a utility cap beyond which he does not value additional items. We give a polynomial time approximation algorithm that maximizes Nash social welfare up to a factor of $e^{1/e} \approx 1.445$.
Export
BibTeX
@online{Chaudhury_arXiv1805.06232, TITLE = {On Fair Division of Indivisible Items}, AUTHOR = {Ray Chaudhury, Bhaskar and Cheung, Yun Kuen and Garg, Jugal and Garg, Naveen and Hoefer, Martin and Mehlhorn, Kurt}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1805.06232}, EPRINT = {1805.06232}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We consider the task of assigning indivisible goods to a set of agents in a fair manner. Our notion of fairness is Nash social welfare, i.e., the goal is to maximize the geometric mean of the utilities of the agents. Each good comes in multiple items or copies, and the utility of an agent diminishes as it receives more items of the same good. The utility of a bundle of items for an agent is the sum of the utilities of the items in the bundle. Each agent has a utility cap beyond which he does not value additional items. We give a polynomial time approximation algorithm that maximizes Nash social welfare up to a factor of $e^{1/e} \approx 1.445$.}, }
Endnote
%0 Report %A Ray Chaudhury, Bhaskar %A Cheung, Yun Kuen %A Garg, Jugal %A Garg, Naveen %A Hoefer, Martin %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On Fair Division of Indivisible Items : %G eng %U http://hdl.handle.net/21.11116/0000-0002-05E7-4 %U http://arxiv.org/abs/1805.06232 %D 2018 %X We consider the task of assigning indivisible goods to a set of agents in a fair manner. Our notion of fairness is Nash social welfare, i.e., the goal is to maximize the geometric mean of the utilities of the agents. Each good comes in multiple items or copies, and the utility of an agent diminishes as it receives more items of the same good. The utility of a bundle of items for an agent is the sum of the utilities of the items in the bundle. Each agent has a utility cap beyond which he does not value additional items. We give a polynomial time approximation algorithm that maximizes Nash social welfare up to a factor of $e^{1/e} \approx 1.445$. %K Computer Science, Data Structures and Algorithms, cs.DS
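To make the objective in the abstract above concrete, the following toy Python snippet evaluates Nash social welfare, the geometric mean of the agents' additive utilities, with optional per-agent utility caps. It simplifies the model to a single copy of each item and is only an illustration of the objective, not the paper's approximation algorithm; the example valuations are invented.
from math import prod

def nash_social_welfare(values, allocation, caps=None):
    """Geometric mean of additive utilities; optional per-agent utility caps."""
    utils = []
    for i, bundle in enumerate(allocation):
        u = sum(values[i][j] for j in bundle)
        if caps is not None:
            u = min(u, caps[i])  # items beyond the cap add no further value
        utils.append(u)
    return prod(utils) ** (1.0 / len(utils))

values = [[3, 1, 2], [1, 4, 1]]                    # values[i][j]: agent i's value for item j
print(nash_social_welfare(values, [{0, 2}, {1}]))  # (5 * 4) ** 0.5, roughly 4.47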
[120]
B. Ray Chaudhury and K. Mehlhorn, “Combinatorial Algorithms for General Linear Arrow-Debreu Markets,” 2018. [Online]. Available: http://arxiv.org/abs/1810.01237. (arXiv: 1810.01237)
Abstract
We present a combinatorial algorithm for determining the market clearing prices of a general linear Arrow-Debreu market, where every agent can own multiple goods. The existing combinatorial algorithms for linear Arrow-Debreu markets consider the case where each agent can own all of one good only. We present an $\tilde{\mathcal{O}}((n+m)^7 \log^3(UW))$ algorithm where $n$, $m$, $U$ and $W$ refer to the number of agents, the number of goods, the maximal integral utility and the maximum quantity of any good in the market respectively. The algorithm refines the iterative algorithm of Duan, Garg and Mehlhorn using several new ideas. We also identify the hard instances for existing combinatorial algorithms for linear Arrow-Debreu markets. In particular we find instances where the ratio of the maximum to the minimum equilibrium price of a good is $U^{\Omega(n)}$ and the number of iterations required by the existing iterative combinatorial algorithms of Duan, and Mehlhorn and Duan, Garg, and Mehlhorn are high. Our instances also separate the two algorithms.
Export
BibTeX
@online{RayChaudhury_arxiv1810.01237, TITLE = {Combinatorial Algorithms for General Linear {Arrow}-{Debreu} Markets}, AUTHOR = {Ray Chaudhury, Bhaskar and Mehlhorn, Kurt}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1810.01237}, EPRINT = {1810.01237}, EPRINTTYPE = {arXiv}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We present a combinatorial algorithm for determining the market clearing prices of a general linear Arrow-Debreu market, where every agent can own multiple goods. The existing combinatorial algorithms for linear Arrow-Debreu markets consider the case where each agent can own all of one good only. We present an $\tilde{\mathcal{O}}((n+m)^7 \log^3(UW))$ algorithm where $n$, $m$, $U$ and $W$ refer to the number of agents, the number of goods, the maximal integral utility and the maximum quantity of any good in the market respectively. The algorithm refines the iterative algorithm of Duan, Garg and Mehlhorn using several new ideas. We also identify the hard instances for existing combinatorial algorithms for linear Arrow-Debreu markets. In particular we find instances where the ratio of the maximum to the minimum equilibrium price of a good is $U^{\Omega(n)}$ and the number of iterations required by the existing iterative combinatorial algorithms of Duan, and Mehlhorn and Duan, Garg, and Mehlhorn are high. Our instances also separate the two algorithms.}, }
Endnote
%0 Report %A Ray Chaudhury, Bhaskar %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Combinatorial Algorithms for General Linear Arrow-Debreu Markets : %G eng %U http://hdl.handle.net/21.11116/0000-0002-57B5-0 %U http://arxiv.org/abs/1810.01237 %D 2018 %8 02.10.2018 %X We present a combinatorial algorithm for determining the market clearing prices of a general linear Arrow-Debreu market, where every agent can own multiple goods. The existing combinatorial algorithms for linear Arrow-Debreu markets consider the case where each agent can own all of one good only. We present an $\tilde{\mathcal{O}}((n+m)^7 \log^3(UW))$ algorithm where $n$, $m$, $U$ and $W$ refer to the number of agents, the number of goods, the maximal integral utility and the maximum quantity of any good in the market respectively. The algorithm refines the iterative algorithm of Duan, Garg and Mehlhorn using several new ideas. We also identify the hard instances for existing combinatorial algorithms for linear Arrow-Debreu markets. In particular we find instances where the ratio of the maximum to the minimum equilibrium price of a good is $U^{\Omega(n)}$ and the number of iterations required by the existing iterative combinatorial algorithms of Duan, and Mehlhorn and Duan, Garg, and Mehlhorn are high. Our instances also separate the two algorithms. %K Computer Science, Computer Science and Game Theory, cs.GT,
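As a small illustration of what market clearing prices mean in the abstract above, the toy Python/NumPy snippet below checks the equilibrium conditions of a hand-made 2-agent, 2-good linear exchange market: at the candidate prices, each agent spends the value of its endowment only on goods of maximum utility per unit of money, and aggregate demand equals supply. This is not the paper's combinatorial algorithm, just a sanity check on an invented instance.
import numpy as np

u = np.array([[1.0, 2.0], [2.0, 1.0]])   # u[i, j]: agent i's utility per unit of good j
e = np.array([[1.0, 0.0], [0.0, 1.0]])   # e[i, j]: agent i's endowment of good j
p = np.array([1.0, 1.0])                 # candidate equilibrium prices

budgets = e @ p                          # value of each agent's endowment at prices p
bang = u / p                             # utility per unit of money ("bang per buck")
demand = np.zeros_like(u)
for i in range(u.shape[0]):
    best = int(np.argmax(bang[i]))       # in this instance each agent has a unique best good
    demand[i, best] = budgets[i] / p[best]

print(np.allclose(demand.sum(axis=0), e.sum(axis=0)))  # True: the market clears at p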
[121]
B. Ray Chaudhury, Y. K. Cheung, J. Garg, N. Garg, M. Hoefer, and K. Mehlhorn, “On Fair Division for Indivisible Items,” in 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018), Ahmedabad, India, 2018.
Export
BibTeX
@inproceedings{Chaudhury_FSTTCS2018b, TITLE = {On Fair Division for Indivisible Items}, AUTHOR = {Ray Chaudhury, Bhaskar and Cheung, Yun Kuen and Garg, Jugal and Garg, Naveen and Hoefer, Martin and Mehlhorn, Kurt}, LANGUAGE = {eng}, ISSN = {1868-896}, ISBN = {978-3-95977-093-4}, URL = {urn:nbn:de:0030-drops-99242}, DOI = {10.4230/LIPIcs.FSTTCS.2018.25}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)}, EDITOR = {Ganguly, Sumit and Pandya, Paritosh}, PAGES = {1--17}, EID = {25}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {122}, ADDRESS = {Ahmedabad, India}, }
Endnote
%0 Conference Proceedings %A Ray Chaudhury, Bhaskar %A Cheung, Yun Kuen %A Garg, Jugal %A Garg, Naveen %A Hoefer, Martin %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On Fair Division for Indivisible Items : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AAE1-0 %R 10.4230/LIPIcs.FSTTCS.2018.25 %U urn:nbn:de:0030-drops-99242 %D 2018 %B 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science %Z date of event: 2018-12-11 - 2018-12-13 %C Ahmedabad, India %B 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science %E Ganguly, Sumit; Pandya, Paritosh %P 1 - 17 %Z sequence number: 25 %I Schloss Dagstuhl %@ 978-3-95977-093-4 %B Leibniz International Proceedings in Informatics %N 122 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/9924/
[122]
B. Ray Chaudhury and K. Mehlhorn, “Combinatorial Algorithms for General Linear Arrow-Debreu Markets,” in 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018), Ahmedabad, India, 2018.
Export
BibTeX
@inproceedings{Chaudhury_FSTTCS2018, TITLE = {Combinatorial Algorithms for General Linear {A}rrow-{D}ebreu Markets}, AUTHOR = {Ray Chaudhury, Bhaskar and Mehlhorn, Kurt}, LANGUAGE = {eng}, ISSN = {1868-896}, ISBN = {978-3-95977-093-4}, URL = {urn:nbn:de:0030-drops-99255}, DOI = {10.4230/LIPIcs.FSTTCS.2018.26}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)}, EDITOR = {Ganguly, Sumit and Pandya, Paritosh}, PAGES = {1--16}, EID = {26}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {122}, ADDRESS = {Ahmedabad, India}, }
Endnote
%0 Conference Proceedings %A Ray Chaudhury, Bhaskar %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Combinatorial Algorithms for General Linear Arrow-Debreu Markets : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AADC-7 %R 10.4230/LIPIcs.FSTTCS.2018.26 %U urn:nbn:de:0030-drops-99255 %D 2018 %B 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science %Z date of event: 2018-12-11 - 2018-12-13 %C Ahmedabad, India %B 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science %E Ganguly, Sumit; Pandya, Paritosh %P 1 - 16 %Z sequence number: 26 %I Schloss Dagstuhl %@ 978-3-95977-093-4 %B Leibniz International Proceedings in Informatics %N 122 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2018/9925/
[123]
A. Schmid and J. M. Schmidt, “Computing 2-Walks in Polynomial Time,” ACM Transactions on Algorithms, vol. 14, no. 2, 2018.
Export
BibTeX
@article{Schmid2018, TITLE = {Computing 2-Walks in Polynomial Time}, AUTHOR = {Schmid, Andreas and Schmidt, Jens M.}, LANGUAGE = {eng}, ISSN = {1549-6325}, DOI = {10.1145/3183368}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {ACM Transactions on Algorithms}, VOLUME = {14}, NUMBER = {2}, EID = {22}, }
Endnote
%0 Journal Article %A Schmid, Andreas %A Schmidt, Jens M. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Computing 2-Walks in Polynomial Time : %G eng %U http://hdl.handle.net/21.11116/0000-0001-949E-6 %R 10.1145/3183368 %7 2018 %D 2018 %J ACM Transactions on Algorithms %V 14 %N 2 %Z sequence number: 22 %I ACM %C New York, NY %@ false
[124]
A. Schmid and J. M. Schmidt, “Computing Tutte Paths,” in 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), Prague, Czech Republic, 2018.
Export
BibTeX
@inproceedings{Schmid_ICALP2018, TITLE = {Computing {T}utte Paths}, AUTHOR = {Schmid, Andreas and Schmidt, Jens M.}, LANGUAGE = {eng}, ISBN = {978-3-95977-076-7}, URL = {urn:nbn:de:0030-drops-91029}, DOI = {10.4230/LIPIcs.ICALP.2018.98}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)}, EDITOR = {Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D{\'a}niel and Sannella, Donald}, PAGES = {1--14}, EID = {98}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {107}, ADDRESS = {Prague, Czech Republic}, }
Endnote
%0 Conference Proceedings %A Schmid, Andreas %A Schmidt, Jens M. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Computing Tutte Paths : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AB32-5 %R 10.4230/LIPIcs.ICALP.2018.98 %U urn:nbn:de:0030-drops-91029 %D 2018 %B 45th International Colloquium on Automata, Languages, and Programming %Z date of event: 2018-07-09 - 2018-07-13 %C Prague, Czech Republic %B 45th International Colloquium on Automata, Languages, and Programming %E Chatzigiannakis, Ioannis; Kaklamanis, Christos; Marx, Dániel; Sannella, Donald %P 1 - 14 %Z sequence number: 98 %I Schloss Dagstuhl %@ 978-3-95977-076-7 %B Leibniz International Proceedings in Informatics %N 107 %U http://drops.dagstuhl.de/opus/volltexte/2018/9102/
[125]
A. Wiese, “Independent Set of Convex Polygons: From n^ϵ to 1+ϵ via Shrinking,” Algorithmica, vol. 80, no. 3, 2018.
Export
BibTeX
@article{Wiese2017, TITLE = {Independent Set of Convex Polygons: From $n^{\epsilon}$ to 1+$\epsilon$ via Shrinking}, AUTHOR = {Wiese, Andreas}, LANGUAGE = {eng}, ISSN = {0178-4617}, DOI = {10.1007/s00453-017-0347-8}, PUBLISHER = {Springer-Verlag}, ADDRESS = {New York}, YEAR = {2018}, MARGINALMARK = {$\bullet$}, DATE = {2018}, JOURNAL = {Algorithmica}, VOLUME = {80}, NUMBER = {3}, PAGES = {918--934}, }
Endnote
%0 Journal Article %A Wiese, Andreas %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Independent Set of Convex Polygons: From n^ϵ to 1+ϵ via Shrinking : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-2602-4 %R 10.1007/s00453-017-0347-8 %7 2017 %D 2018 %J Algorithmica %V 80 %N 3 %& 918 %P 918 - 934 %I Springer-Verlag %C New York %@ false
2017
[126]
A. Abboud, A. Backurs, K. Bringmann, and M. Künnemann, “Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve,” in 58th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2017), Berkeley, CA, USA, 2017.
Export
BibTeX
@inproceedings{Abboud_FOCS2017, TITLE = {Fine-Grained Complexity of Analyzing Compressed Data: {Q}uantifying Improvements over Decompress-And-Solve}, AUTHOR = {Abboud, Amir and Backurs, Arturs and Bringmann, Karl and K{\"u}nnemann, Marvin}, LANGUAGE = {eng}, ISBN = {978-1-5386-3464-6}, DOI = {10.1109/FOCS.2017.26}, PUBLISHER = {IEEE}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {58th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2017)}, PAGES = {192--203}, ADDRESS = {Berkeley, CA, USA}, }
Endnote
%0 Conference Proceedings %A Abboud, Amir %A Backurs, Arturs %A Bringmann, Karl %A Künnemann, Marvin %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve : %G eng %U http://hdl.handle.net/21.11116/0000-0000-0475-8 %R 10.1109/FOCS.2017.26 %D 2017 %B 58th Annual IEEE Symposium on Foundations of Computer Science %Z date of event: 2017-10-15 - 2017-10-17 %C Berkeley, CA, USA %B 58th Annual IEEE Symposium on Foundations of Computer Science %P 192 - 203 %I IEEE %@ 978-1-5386-3464-6
[127]
A. Abboud, K. Bringmann, D. Hermelin, and D. Shabtay, “SETH-Based Lower Bounds for Subset Sum and Bicriteria Path,” 2017. [Online]. Available: http://arxiv.org/abs/1704.04546. (arXiv: 1704.04546)
Abstract
Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other. Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial $O^{*}(T)$-time algorithm for Subset-Sum on $n$ numbers and target $T$ cannot be improved to time $T^{1-\varepsilon}\cdot 2^{o(n)}$ for any $\varepsilon>0$, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity. As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of $N$ given instances of Subset-Sum is a YES instance requires time $(N T)^{1-o(1)}$. As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria s,t-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with $m$ edges and edge lengths bounded by $L$, we show that the $O(Lm)$ pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to $\tilde{O}(L+m)$, in contrast to a recent improvement for Subset Sum (Bringmann, SODA 2017).
Export
BibTeX
@online{DBLP:journals/corr/AbboudBHS17, TITLE = {{SETH}-Based Lower Bounds for Subset Sum and Bicriteria Path}, AUTHOR = {Abboud, Amir and Bringmann, Karl and Hermelin, Danny and Shabtay, Dvir}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1704.04546}, EPRINT = {1704.04546}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other. Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial $O^{*}(T)$-time algorithm for Subset-Sum on $n$ numbers and target $T$ cannot be improved to time $T^{1-\varepsilon}\cdot 2^{o(n)}$ for any $\varepsilon>0$, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity. As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of $N$ given instances of Subset-Sum is a YES instance requires time $(N T)^{1-o(1)}$. As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria s,t-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with $m$ edges and edge lengths bounded by $L$, we show that the $O(Lm)$ pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to $\tilde{O}(L+m)$, in contrast to a recent improvement for Subset Sum (Bringmann, SODA 2017).}, }
Endnote
%0 Report %A Abboud, Amir %A Bringmann, Karl %A Hermelin, Danny %A Shabtay, Dvir %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T SETH-Based Lower Bounds for Subset Sum and Bicriteria Path : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-89E5-3 %U http://arxiv.org/abs/1704.04546 %D 2017 %X Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other. Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial $O^{*}(T)$-time algorithm for Subset-Sum on $n$ numbers and target $T$ cannot be improved to time $T^{1-\varepsilon}\cdot 2^{o(n)}$ for any $\varepsilon>0$, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity. As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of $N$ given instances of Subset-Sum is a YES instance requires time $(N T)^{1-o(1)}$. As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria s,t-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with $m$ edges and edge lengths bounded by $L$, we show that the $O(Lm)$ pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to $\tilde{O}(L+m)$, in contrast to a recent improvement for Subset Sum (Bringmann, SODA 2017). %K Computer Science, Data Structures and Algorithms, cs.DS,Computer Science, Computational Complexity, cs.CC
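The abstract above takes Bellman's 1962 pseudo-polynomial dynamic program for Subset Sum as its baseline. For reference only (the paper proves a conditional lower bound and does not introduce this routine), here is a minimal Python sketch of that O(nT)-time dynamic program on an invented instance.
def subset_sum(numbers, target):
    """Bellman-style DP: reachable[s] is True iff some subset of the numbers sums to s."""
    reachable = [True] + [False] * target
    for x in numbers:
        for s in range(target, x - 1, -1):  # descend so each number is used at most once
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5 = 9)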
[128]
I. Abraham, S. Chechik, and S. Krinninger, “Fully Dynamic All-Pairs Shortest Paths with Worst-Case Update-Time Revisited,” in Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), Barcelona, Spain, 2017.
Export
BibTeX
@inproceedings{AbrahamCK17, TITLE = {Fully Dynamic All-Pairs Shortest Paths with Worst-Case Update-Time Revisited}, AUTHOR = {Abraham, Ittai and Chechik, Shiri and Krinninger, Sebastian}, LANGUAGE = {eng}, ISBN = {978-1-61197-478-2}, DOI = {10.1137/1.9781611974782.28}, PUBLISHER = {SIAM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, EDITOR = {Klein, Philip N.}, PAGES = {440--452}, ADDRESS = {Barcelona, Spain}, }
Endnote
%0 Conference Proceedings %A Abraham, Ittai %A Chechik, Shiri %A Krinninger, Sebastian %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fully Dynamic All-Pairs Shortest Paths with Worst-Case Update-Time Revisited : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-52D0-1 %R 10.1137/1.9781611974782.28 %D 2017 %B Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2017-01-16 - 2017-01-19 %C Barcelona, Spain %B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %E Klein, Philip N. %P 440 - 452 %I SIAM %@ 978-1-61197-478-2
[129]
A. Adamaszek, M. P. Renault, A. Rosen, and R. van Stee, “Reordering Buffer Management with Advice,” Journal of Scheduling, vol. 20, no. 5, 2017.
Export
BibTeX
@article{Adamaszek2017, TITLE = {Reordering Buffer Management with Advice}, AUTHOR = {Adamaszek, Anna and Renault, Marc P. and Rosen, Adi and van Stee, Rob}, LANGUAGE = {eng}, ISSN = {1094-6136}, DOI = {10.1007/s10951-016-0487-8}, PUBLISHER = {Wiley}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Journal of Scheduling}, VOLUME = {20}, NUMBER = {5}, PAGES = {423--442}, }
Endnote
%0 Journal Article %A Adamaszek, Anna %A Renault, Marc P. %A Rosen, Adi %A van Stee, Rob %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Reordering Buffer Management with Advice : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-0E60-C %R 10.1007/s10951-016-0487-8 %7 2016-06-17 %D 2017 %J Journal of Scheduling %V 20 %N 5 %& 423 %P 423 - 442 %I Wiley %C New York, NY %@ false
[130]
N. Alon, S. Moran, and A. Yehudayoff, “Sign Rank versus Vapnik-Chervonenkis Dimension,” Sbornik: Mathematics, vol. 208, no. 12, 2017.
Export
BibTeX
@article{Alon2017, TITLE = {Sign Rank versus {V}apnik-{C}hervonenkis Dimension}, AUTHOR = {Alon, Noga and Moran, Shay and Yehudayoff, Amir}, LANGUAGE = {eng}, ISSN = {1064-5616}, DOI = {10.1070/SM8780}, PUBLISHER = {Mathematical Society, Turpion Ltd.}, ADDRESS = {London}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Sbornik: Mathematics}, VOLUME = {208}, NUMBER = {12}, PAGES = {1724--1757}, }
Endnote
%0 Journal Article %A Alon, Noga %A Moran, Shay %A Yehudayoff, Amir %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Sign Rank versus Vapnik-Chervonenkis Dimension : %G eng %U http://hdl.handle.net/21.11116/0000-0000-C8D2-1 %R 10.1070/SM8780 %7 2017 %D 2017 %J Sbornik: Mathematics %O Sb. Math. %V 208 %N 12 %& 1724 %P 1724 - 1757 %I Mathematical Society, Turpion Ltd. %C London %@ false
[131]
E. Althaus, B. Beber, W. Damm, S. Disch, W. Hagemann, A. Rakow, C. Scholl, U. Waldmann, and B. Wirtz, “Verification of Linear Hybrid Systems with Large Discrete State Spaces Using Counterexample-guided Abstraction Refinement,” Science of Computer Programming, vol. 148, 2017.
Export
BibTeX
@article{Althaus2017, TITLE = {Verification of Linear Hybrid Systems with Large Discrete State Spaces Using Counterexample-guided Abstraction Refinement}, AUTHOR = {Althaus, Ernst and Beber, Bj{\"o}rn and Damm, Werner and Disch, Stefan and Hagemann, Willem and Rakow, Astrid and Scholl, Christoph and Waldmann, Uwe and Wirtz, Boris}, LANGUAGE = {eng}, ISSN = {0167-6423}, DOI = {10.1016/j.scico.2017.04.010}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Science of Computer Programming}, VOLUME = {148}, PAGES = {123--160}, }
Endnote
%0 Journal Article %A Althaus, Ernst %A Beber, Björn %A Damm, Werner %A Disch, Stefan %A Hagemann, Willem %A Rakow, Astrid %A Scholl, Christoph %A Waldmann, Uwe %A Wirtz, Boris %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations Automation of Logic, MPI for Informatics, Max Planck Society External Organizations External Organizations Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Verification of Linear Hybrid Systems with Large Discrete State Spaces Using Counterexample-guided Abstraction Refinement : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-1C23-5 %R 10.1016/j.scico.2017.04.010 %7 2017-05-10 %D 2017 %J Science of Computer Programming %V 148 %& 123 %P 123 - 160 %I Elsevier %C Amsterdam %@ false
[132]
S. Anand, K. Bringmann, T. Friedrich, N. Garg, and A. Kumar, “Minimizing Maximum (Weighted) Flow-Time on Related and Unrelated Machines,” Algorithmica, vol. 77, no. 2, 2017.
Export
BibTeX
@article{DBLP:journals/algorithmica/0002B0G017, TITLE = {Minimizing Maximum (Weighted) Flow-Time on Related and Unrelated Machines}, AUTHOR = {Anand, S. and Bringmann, Karl and Friedrich, Tobias and Garg, Naveen and Kumar, Amit}, LANGUAGE = {eng}, ISSN = {0178-4617}, DOI = {10.1007/s00453-015-0082-y}, PUBLISHER = {Springer-Verlag}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Algorithmica}, VOLUME = {77}, NUMBER = {2}, PAGES = {515--536}, }
Endnote
%0 Journal Article %A Anand, S. %A Bringmann, Karl %A Friedrich, Tobias %A Garg, Naveen %A Kumar, Amit %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Discrete Optimization, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Minimizing Maximum (Weighted) Flow-Time on Related and Unrelated Machines : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5527-9 %R 10.1007/s00453-015-0082-y %7 2015 %D 2017 %J Algorithmica %V 77 %N 2 %& 515 %P 515 - 536 %I Springer-Verlag %C New York, NY %@ false
[133]
A. Antoniadis, P. Kling, S. Ott, and S. Riechers, “Continuous Speed Scaling with Variability: A Simple and Direct Approach,” Theoretical Computer Science, vol. 678, 2017.
Export
BibTeX
@article{Antoniadis2017, TITLE = {Continuous Speed Scaling with Variability: {A} Simple and Direct Approach}, AUTHOR = {Antoniadis, Antonios and Kling, Peter and Ott, Sebastian and Riechers, S{\"o}ren}, LANGUAGE = {eng}, ISSN = {0304-3975}, DOI = {10.1016/j.tcs.2017.03.021}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Theoretical Computer Science}, VOLUME = {678}, PAGES = {1--13}, }
Endnote
%0 Journal Article %A Antoniadis, Antonios %A Kling, Peter %A Ott, Sebastian %A Riechers, Sören %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Continuous Speed Scaling with Variability: A Simple and Direct Approach : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-7857-F %R 10.1016/j.tcs.2017.03.021 %7 2017 %D 2017 %J Theoretical Computer Science %V 678 %& 1 %P 1 - 13 %I Elsevier %C Amsterdam %@ false
[134]
A. Antoniadis, N. Barcelo, M. Consuegra, P. Kling, M. Nugent, K. Pruhs, and M. Scquizzato, “Efficient Computation of Optimal Energy and Fractional Weighted Flow Trade-Off Schedules,” Algorithmica, vol. 79, no. 2, 2017.
Export
BibTeX
@article{Antoniadis2016, TITLE = {Efficient Computation of Optimal Energy and Fractional Weighted Flow Trade-Off Schedules}, AUTHOR = {Antoniadis, Antonios and Barcelo, Neal and Consuegra, Mario and Kling, Peter and Nugent, Michael and Pruhs, Kirk and Scquizzato, Michele}, LANGUAGE = {eng}, ISSN = {0178-4617}, DOI = {10.1007/s00453-016-0208-x}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Algorithmica}, VOLUME = {79}, NUMBER = {2}, PAGES = {568--597}, }
Endnote
%0 Journal Article %A Antoniadis, Antonios %A Barcelo, Neal %A Consuegra, Mario %A Kling, Peter %A Nugent, Michael %A Pruhs, Kirk %A Scquizzato, Michele %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations External Organizations %T Efficient Computation of Optimal Energy and Fractional Weighted Flow Trade-Off Schedules : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-58AA-7 %R 10.1007/s00453-016-0208-x %7 2016-08-31 %D 2017 %J Algorithmica %V 79 %N 2 %& 568 %P 568 - 597 %I Springer %C New York, NY %@ false
[135]
Y. Azar, M. Hoefer, I. Maor, R. Reiffenhäuser, and B. Vöcking, “Truthful Mechanism Design via Correlated Tree Rounding,” Mathematical Programming / A, vol. 163, no. 1–2, 2017.
Export
BibTeX
@article{Azar2017, TITLE = {Truthful Mechanism Design via Correlated Tree Rounding}, AUTHOR = {Azar, Yossi and Hoefer, Martin and Maor, Idan and Reiffenh{\"a}user, Rebecca and V{\"o}cking, Berthold}, LANGUAGE = {eng}, ISSN = {0025-5610}, DOI = {10.1007/s10107-016-1068-5}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Mathematical Programming / A}, VOLUME = {163}, NUMBER = {1-2}, PAGES = {445--469}, }
Endnote
%0 Journal Article %A Azar, Yossi %A Hoefer, Martin %A Maor, Idan %A Reiffenhäuser, Rebecca %A Vöcking, Berthold %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Truthful Mechanism Design via Correlated Tree Rounding : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-326B-A %R 10.1007/s10107-016-1068-5 %7 2016-09-10 %D 2017 %J Mathematical Programming / A %V 163 %N 1-2 %& 445 %P 445 - 469 %I Springer %C New York, NY %@ false
[136]
J. Baldus and K. Bringmann, “A Fast Implementation of Near Neighbors Queries for Fréchet Distance (GIS Cup),” in 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (GIS 2017), Redondo Beach, CA, USA, 2017.
Export
BibTeX
@inproceedings{Baldus_SIGSPATIAL2017, TITLE = {A Fast Implementation of Near Neighbors Queries for {F}r\'{e}chet Distance ({GIS Cup})}, AUTHOR = {Baldus, Julian and Bringmann, Karl}, LANGUAGE = {eng}, ISBN = {978-1-4503-5490-5}, DOI = {10.1145/3139958.3140062}, PUBLISHER = {ACM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (GIS 2017)}, EDITOR = {Hoel, Erik and Newsam, Shawn and Ravada, Siva and Tamassia, Roberto and Trajcevski, Goce}, EID = {99}, ADDRESS = {Redondo Beach, CA, USA}, }
Endnote
%0 Conference Proceedings %A Baldus, Julian %A Bringmann, Karl %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Fast Implementation of Near Neighbors Queries for Fréchet Distance (GIS Cup) : %G eng %U http://hdl.handle.net/21.11116/0000-0001-3E17-1 %R 10.1145/3139958.3140062 %D 2017 %B 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems %Z date of event: 2017-11-07 - 2017-11-10 %C Redondo Beach, CA, USA %B 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems %E Hoel, Erik; Newsam, Shawn; Ravada, Siva; Tamassia, Roberto; Trajcevski, Goce %Z sequence number: 99 %I ACM %@ 978-1-4503-5490-5
[137]
L. Becchetti, A. Clementi, E. Natale, F. Pasquale, and L. Trevisan, “Find Your Place: Simple Distributed Algorithms for Community Detection,” in Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), Barcelona, Spain, 2017.
Export
BibTeX
@inproceedings{BCNPT17, TITLE = {Find Your Place: {S}imple Distributed Algorithms for Community Detection}, AUTHOR = {Becchetti, Luca and Clementi, Andrea and Natale, Emanuele and Pasquale, Francesco and Trevisan, Luca}, LANGUAGE = {eng}, DOI = {10.1137/1.9781611974782.59}, PUBLISHER = {SIAM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, PAGES = {940--959}, ADDRESS = {Barcelona, Spain}, }
Endnote
%0 Conference Proceedings %A Becchetti, Luca %A Clementi, Andrea %A Natale, Emanuele %A Pasquale, Francesco %A Trevisan, Luca %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Find Your Place: Simple Distributed Algorithms for Community Detection : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5877-A %R 10.1137/1.9781611974782.59 %D 2017 %B Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2017-01-16 - 2017-01-19 %C Barcelona, Spain %B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %P 940 - 959 %I SIAM
[138]
L. Becchetti, A. Clementi, E. Natale, F. Pasquale, R. Silvestri, and L. Trevisan, “Simple Dynamics for Plurality Consensus,” Distributed Computing, vol. 30, no. 4, 2017.
Export
BibTeX
@article{Becchetti2017, TITLE = {Simple Dynamics for Plurality Consensus}, AUTHOR = {Becchetti, Luca and Clementi, Andrea and Natale, Emanuele and Pasquale, Francesco and Silvestri, Riccardo and Trevisan, Luca}, LANGUAGE = {eng}, ISSN = {0178-2770}, DOI = {10.1007/s00446-016-0289-4}, PUBLISHER = {Springer International}, ADDRESS = {Berlin}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Distributed Computing}, VOLUME = {30}, NUMBER = {4}, PAGES = {293--306}, }
Endnote
%0 Journal Article %A Becchetti, Luca %A Clementi, Andrea %A Natale, Emanuele %A Pasquale, Francesco %A Silvestri, Riccardo %A Trevisan, Luca %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Simple Dynamics for Plurality Consensus : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-F885-F %R 10.1007/s00446-016-0289-4 %7 2016-11-22 %D 2017 %J Distributed Computing %V 30 %N 4 %& 293 %P 293 - 306 %I Springer International %C Berlin %@ false
[139]
R. Becker, A. Karrenbauer, S. Krinninger, and C. Lenzen, “Near-Optimal Approximate Shortest Paths and Transshipment in Distributed and Streaming Models,” in 31st International Symposium on Distributed Computing (DISC 2017), Vienna, Austria, 2017.
Export
BibTeX
@inproceedings{Becker_DISC17, TITLE = {Near-Optimal Approximate Shortest Paths and Transshipment in Distributed and Streaming Models}, AUTHOR = {Becker, Ruben and Karrenbauer, Andreas and Krinninger, Sebastian and Lenzen, Christoph}, LANGUAGE = {eng}, ISBN = {978-3-95977-053-8}, URL = {urn:nbn:de:0030-drops-80031}, DOI = {10.4230/LIPIcs.DISC.2017.7}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {31st International Symposium on Distributed Computing (DISC 2017)}, EDITOR = {Richa, Andr{\'e}a W.}, PAGES = {1--16}, EID = {7}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {91}, ADDRESS = {Vienna, Austria}, }
Endnote
%0 Conference Proceedings %A Becker, Ruben %A Karrenbauer, Andreas %A Krinninger, Sebastian %A Lenzen, Christoph %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Near-Optimal Approximate Shortest Paths and Transshipment in Distributed and Streaming Models : %G eng %U http://hdl.handle.net/21.11116/0000-0000-3F19-F %U urn:nbn:de:0030-drops-80031 %R 10.4230/LIPIcs.DISC.2017.7 %D 2017 %B 31st International Symposium on Distributed Computing %Z date of event: 2017-10-16 - 2017-10-20 %C Vienna, Austria %B 31st International Symposium on Distributed Computing %E Richa, Andréa W. %P 1 - 16 %Z sequence number: 7 %I Schloss Dagstuhl %@ 978-3-95977-053-8 %B Leibniz International Proceedings in Informatics %N 91 %U http://drops.dagstuhl.de/opus/volltexte/2017/8003/
[140]
R. Becker, V. Bonifaci, A. Karrenbauer, P. Kolev, and K. Mehlhorn, “Two Results on Slime Mold Computations,” 2017. [Online]. Available: http://arxiv.org/abs/1707.06631. (arXiv: 1707.06631)
Abstract
In this paper, we present two results on slime mold computations. The first one treats a biologically-grounded model, originally proposed by biologists analyzing the behavior of the slime mold Physarum polycephalum. This primitive organism was empirically shown by Nakagaki et al. to solve shortest path problems in wet-lab experiments (Nature'00). We show that the proposed simple mathematical model actually generalizes to a much wider class of problems, namely undirected linear programs with a non-negative cost vector. For our second result, we consider the discretization of a biologically-inspired model. This model is a directed variant of the biologically-grounded one and was never claimed to describe the behavior of a biological system. Straszak and Vishnoi showed that it can $\epsilon$-approximately solve flow problems (SODA'16) and even general linear programs with positive cost vector (ITCS'16) within a finite number of steps. We give a refined convergence analysis that improves the dependence on $\epsilon$ from polynomial to logarithmic and simultaneously allows to choose a step size that is independent of $\epsilon$. Furthermore, we show that the dynamics can be initialized with a more general set of (infeasible) starting points.
Export
BibTeX
@online{Becker_arxiv2017, TITLE = {Two Results on Slime Mold Computations}, AUTHOR = {Becker, Ruben and Bonifaci, Vincenzo and Karrenbauer, Andreas and Kolev, Pavel and Mehlhorn, Kurt}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1707.06631}, EPRINT = {1707.06631}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {In this paper, we present two results on slime mold computations. The first one treats a biologically-grounded model, originally proposed by biologists analyzing the behavior of the slime mold Physarum polycephalum. This primitive organism was empirically shown by Nakagaki et al. to solve shortest path problems in wet-lab experiments (Nature'00). We show that the proposed simple mathematical model actually generalizes to a much wider class of problems, namely undirected linear programs with a non-negative cost vector. For our second result, we consider the discretization of a biologically-inspired model. This model is a directed variant of the biologically-grounded one and was never claimed to describe the behavior of a biological system. Straszak and Vishnoi showed that it can $\epsilon$-approximately solve flow problems (SODA'16) and even general linear programs with positive cost vector (ITCS'16) within a finite number of steps. We give a refined convergence analysis that improves the dependence on $\epsilon$ from polynomial to logarithmic and simultaneously allows to choose a step size that is independent of $\epsilon$. Furthermore, we show that the dynamics can be initialized with a more general set of (infeasible) starting points.}, }
Endnote
%0 Report %A Becker, Ruben %A Bonifaci, Vincenzo %A Karrenbauer, Andreas %A Kolev, Pavel %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Two Results on Slime Mold Computations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-FBA8-F %U http://arxiv.org/abs/1707.06631 %D 2017 %X In this paper, we present two results on slime mold computations. The first one treats a biologically-grounded model, originally proposed by biologists analyzing the behavior of the slime mold Physarum polycephalum. This primitive organism was empirically shown by Nakagaki et al. to solve shortest path problems in wet-lab experiments (Nature'00). We show that the proposed simple mathematical model actually generalizes to a much wider class of problems, namely undirected linear programs with a non-negative cost vector. For our second result, we consider the discretization of a biologically-inspired model. This model is a directed variant of the biologically-grounded one and was never claimed to describe the behavior of a biological system. Straszak and Vishnoi showed that it can $\epsilon$-approximately solve flow problems (SODA'16) and even general linear programs with positive cost vector (ITCS'16) within a finite number of steps. We give a refined convergence analysis that improves the dependence on $\epsilon$ from polynomial to logarithmic and simultaneously allows to choose a step size that is independent of $\epsilon$. Furthermore, we show that the dynamics can be initialized with a more general set of (infeasible) starting points. %K Computer Science, Data Structures and Algorithms, cs.DS,Mathematics, Dynamical Systems, math.DS,Mathematics, Optimization and Control, math.OC, Physics, Biological Physics, physics.bio-ph
[141]
R. Becker and M. Sagraloff, “Counting Solutions of a Polynomial System Locally and Exactly,” 2017. [Online]. Available: http://arxiv.org/abs/1712.05487. (arXiv: 1712.05487)
Abstract
We propose a symbolic-numeric algorithm to count the number of solutions of a polynomial system within a local region. More specifically, given a zero-dimensional system $f_1=\cdots=f_n=0$, with $f_i\in\mathbb{C}[x_1,\ldots,x_n]$, and a polydisc $\mathbf{\Delta}\subset\mathbb{C}^n$, our method aims to certify the existence of $k$ solutions (counted with multiplicity) within the polydisc. In case of success, it yields the correct result under guarantee. Otherwise, no information is given. However, we show that our algorithm always succeeds if $\mathbf{\Delta}$ is sufficiently small and well-isolating for a $k$-fold solution $\mathbf{z}$ of the system. Our analysis of the algorithm further yields a bound on the size of the polydisc for which our algorithm succeeds under guarantee. This bound depends on local parameters such as the size and multiplicity of $\mathbf{z}$ as well as the distances between $\mathbf{z}$ and all other solutions. Efficiency of our method stems from the fact that we reduce the problem of counting the roots in $\mathbf{\Delta}$ of the original system to the problem of solving a truncated system of degree $k$. In particular, if the multiplicity $k$ of $\mathbf{z}$ is small compared to the total degrees of the polynomials $f_i$, our method considerably improves upon known complete and certified methods. For the special case of a bivariate system, we report on an implementation of our algorithm, and show experimentally that our algorithm leads to a significant improvement, when integrated as inclusion predicate into an elimination method.
Export
BibTeX
@online{Becker_arXiv1712.05487, TITLE = {Counting Solutions of a Polynomial System Locally and Exactly}, AUTHOR = {Becker, Ruben and Sagraloff, Michael}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1712.05487}, EPRINT = {1712.05487}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We propose a symbolic-numeric algorithm to count the number of solutions of a polynomial system within a local region. More specifically, given a zero-dimensional system $f_1=\cdots=f_n=0$, with $f_i\in\mathbb{C}[x_1,\ldots,x_n]$, and a polydisc $\mathbf{\Delta}\subset\mathbb{C}^n$, our method aims to certify the existence of $k$ solutions (counted with multiplicity) within the polydisc. In case of success, it yields the correct result under guarantee. Otherwise, no information is given. However, we show that our algorithm always succeeds if $\mathbf{\Delta}$ is sufficiently small and well-isolating for a $k$-fold solution $\mathbf{z}$ of the system. Our analysis of the algorithm further yields a bound on the size of the polydisc for which our algorithm succeeds under guarantee. This bound depends on local parameters such as the size and multiplicity of $\mathbf{z}$ as well as the distances between $\mathbf{z}$ and all other solutions. Efficiency of our method stems from the fact that we reduce the problem of counting the roots in $\mathbf{\Delta}$ of the original system to the problem of solving a truncated system of degree $k$. In particular, if the multiplicity $k$ of $\mathbf{z}$ is small compared to the total degrees of the polynomials $f_i$, our method considerably improves upon known complete and certified methods. For the special case of a bivariate system, we report on an implementation of our algorithm, and show experimentally that our algorithm leads to a significant improvement, when integrated as inclusion predicate into an elimination method.}, }
Endnote
%0 Report %A Becker, Ruben %A Sagraloff, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Counting Solutions of a Polynomial System Locally and Exactly : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AB99-1 %U http://arxiv.org/abs/1712.05487 %D 2017 %X We propose a symbolic-numeric algorithm to count the number of solutions of a polynomial system within a local region. More specifically, given a zero-dimensional system $f_1=\cdots=f_n=0$, with $f_i\in\mathbb{C}[x_1,\ldots,x_n]$, and a polydisc $\mathbf{\Delta}\subset\mathbb{C}^n$, our method aims to certify the existence of $k$ solutions (counted with multiplicity) within the polydisc. In case of success, it yields the correct result under guarantee. Otherwise, no information is given. However, we show that our algorithm always succeeds if $\mathbf{\Delta}$ is sufficiently small and well-isolating for a $k$-fold solution $\mathbf{z}$ of the system. Our analysis of the algorithm further yields a bound on the size of the polydisc for which our algorithm succeeds under guarantee. This bound depends on local parameters such as the size and multiplicity of $\mathbf{z}$ as well as the distances between $\mathbf{z}$ and all other solutions. Efficiency of our method stems from the fact that we reduce the problem of counting the roots in $\mathbf{\Delta}$ of the original system to the problem of solving a truncated system of degree $k$. In particular, if the multiplicity $k$ of $\mathbf{z}$ is small compared to the total degrees of the polynomials $f_i$, our method considerably improves upon known complete and certified methods. For the special case of a bivariate system, we report on an implementation of our algorithm, and show experimentally that our algorithm leads to a significant improvement, when integrated as inclusion predicate into an elimination method. %K Computer Science, Symbolic Computation, cs.SC,Computer Science, Numerical Analysis, cs.NA,Mathematics, Numerical Analysis, math.NA
[142]
X. Bei, J. Garg, M. Hoefer, and K. Mehlhorn, “Earning Limits in Fisher Markets with Spending-Constraint Utilities,” in Algorithmic Game Theory (SAGT 2017), L’Aquila, Italy, 2017.
Export
BibTeX
@inproceedings{BeiSAGT2017, TITLE = {Earning Limits in {Fisher} Markets with Spending-Constraint Utilities}, AUTHOR = {Bei, Xiaohui and Garg, Jugal and Hoefer, Martin and Mehlhorn, Kurt}, LANGUAGE = {eng}, ISBN = {978-3-319-66699-0}, DOI = {10.1007/978-3-319-66700-3_6}, PUBLISHER = {Springer}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Algorithmic Game Theory (SAGT 2017)}, PAGES = {67--79}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {10504}, ADDRESS = {L'Aquila, Italy}, }
Endnote
%0 Conference Proceedings %A Bei, Xiaohui %A Garg, Jugal %A Hoefer, Martin %A Mehlhorn, Kurt %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Earning Limits in Fisher Markets with Spending-Constraint Utilities : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-E7DB-7 %R 10.1007/978-3-319-66700-3_6 %D 2017 %B 10th International Symposium on Algorithmic Game Theory %Z date of event: 2017-09-12 - 2017-09-14 %C L'Aquila, Italy %B Algorithmic Game Theory %P 67 - 79 %I Springer %@ 978-3-319-66699-0 %B Lecture Notes in Computer Science %N 10504
[143]
F. Benhamouda, T. Lepoint, C. Mathieu, and H. Zhou, “Optimization of Bootstrapping in Circuits,” in Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), Barcelona, Spain, 2017.
Export
BibTeX
@inproceedings{doi:10.1137/1.9781611974782.160, TITLE = {Optimization of Bootstrapping in Circuits}, AUTHOR = {Benhamouda, Fabrice and Lepoint, Tancr{\`e}de and Mathieu, Claire and Zhou, Hang}, LANGUAGE = {eng}, ISBN = {978-1-61197-478-2}, DOI = {10.1137/1.9781611974782.160}, PUBLISHER = {SIAM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, EDITOR = {Klein, Philip N.}, PAGES = {2423--2433}, ADDRESS = {Barcelona, Spain}, }
Endnote
%0 Conference Proceedings %A Benhamouda, Fabrice %A Lepoint, Tancrède %A Mathieu, Claire %A Zhou, Hang %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Optimization of Bootstrapping in Circuits : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4EBE-A %R 10.1137/1.9781611974782.160 %D 2017 %B Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2017-01-16 - 2017-01-19 %C Barcelona, Spain %B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %E Klein, Philip N. %P 2423 - 2433 %I SIAM %@ 978-1-61197-478-2
[144]
P. Berenbrink, A. Clementi, R. Elsässer, P. Kling, F. Mallmann-Trenn, and E. Natale, “Ignore or Comply?: On Breaking Symmetry in Consensus,” in PODC’17, ACM Symposium on Principles of Distributed Computing, Washington, DC, USA, 2017.
Export
BibTeX
@inproceedings{Berenbrink:2017:ICB:3087801.3087817, TITLE = {Ignore or Comply?: {O}n Breaking Symmetry in Consensus}, AUTHOR = {Berenbrink, Petra and Clementi, Andrea and Els{\"a}sser, Robert and Kling, Peter and Mallmann-Trenn, Frederik and Natale, Emanuele}, LANGUAGE = {eng}, ISBN = {978-1-4503-4992-5}, DOI = {10.1145/3087801.3087817}, PUBLISHER = {ACM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {PODC'17, ACM Symposium on Principles of Distributed Computing}, PAGES = {335--344}, ADDRESS = {Washington, DC, USA}, }
Endnote
%0 Conference Proceedings %A Berenbrink, Petra %A Clementi, Andrea %A Elsässer, Robert %A Kling, Peter %A Mallmann-Trenn, Frederik %A Natale, Emanuele %+ External Organizations External Organizations External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Ignore or Comply?: On Breaking Symmetry in Consensus : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-76B2-0 %R 10.1145/3087801.3087817 %D 2017 %B ACM Symposium on Principles of Distributed Computing %Z date of event: 2017-07-25 - 2017-07-27 %C Washington, DC, USA %B PODC'17 %P 335 - 344 %I ACM %@ 978-1-4503-4992-5
[145]
O. Beyersdorff, L. Chew, and K. Sreenivasaiah, “A Game Characterisation of Tree-like Q-Resolution Size,” Journal of Computer and System Sciences, vol. In Press, 2017.
Export
BibTeX
@article{Beyersdorff2017, TITLE = {A Game Characterisation of Tree-like {Q-Resolution} Size}, AUTHOR = {Beyersdorff, Olaf and Chew, Leroy and Sreenivasaiah, Karteek}, LANGUAGE = {eng}, ISSN = {0022-0000}, DOI = {10.1016/j.jcss.2016.11.011}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, JOURNAL = {Journal of Computer and System Sciences}, VOLUME = {In Press}, }
Endnote
%0 Journal Article %A Beyersdorff, Olaf %A Chew, Leroy %A Sreenivasaiah, Karteek %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Game Characterisation of Tree-like Q-Resolution Size : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5F80-F %R 10.1016/j.jcss.2016.11.011 %7 2017 %D 2017 %J Journal of Computer and System Sciences %V In Press %I Elsevier %C Amsterdam %@ false
[146]
L. Boczkowski, A. Korman, and E. Natale, “Minimizing Message Size in Stochastic Communication Patterns: Fast Self-Stabilizing Protocols with 3 bits,” in Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), Barcelona, Spain, 2017.
Export
BibTeX
@inproceedings{BKN17, TITLE = {Minimizing Message Size in Stochastic Communication Patterns: {F}ast Self-Stabilizing Protocols with 3 bits}, AUTHOR = {Boczkowski, Lucas and Korman, Amos and Natale, Emanuele}, LANGUAGE = {eng}, DOI = {10.1137/1.9781611974782.168}, PUBLISHER = {SIAM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, PAGES = {2540--2559}, ADDRESS = {Barcelona, Spain}, }
Endnote
%0 Conference Proceedings %A Boczkowski, Lucas %A Korman, Amos %A Natale, Emanuele %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Minimizing Message Size in Stochastic Communication Patterns: Fast Self-Stabilizing Protocols with 3 bits : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-587B-2 %R 10.1137/1.9781611974782.168 %D 2017 %B Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2017-01-16 - 2017-01-19 %C Barcelona, Spain %B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %P 2540 - 2559 %I SIAM
[147]
N. Boucquey and A. Kinali, “Damped Sine Based Time Interval Counter,” in Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium (EFTF/IFC 2017), Besançon, France, 2017.
Export
BibTeX
@inproceedings{Boucquey_EFTF/IFCS2017b, TITLE = {Damped Sine Based Time Interval Counter}, AUTHOR = {Boucquey, Nicolas and Kinali, Attila}, LANGUAGE = {eng}, ISBN = {978-1-5386-2916-1}, DOI = {10.1109/FCS.2017.8088819}, PUBLISHER = {IEEE}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium (EFTF/IFC 2017)}, PAGES = {121--123}, ADDRESS = {Besan{\c c}on, France}, }
Endnote
%0 Conference Proceedings %A Boucquey, Nicolas %A Kinali, Attila %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Damped Sine Based Time Interval Counter : %G eng %U http://hdl.handle.net/21.11116/0000-0001-94B8-8 %R 10.1109/FCS.2017.8088819 %D 2017 %B Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium %Z date of event: 2017-07-09 - 2017-07-13 %C Besançon, France %B Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium %P 121 - 123 %I IEEE %@ 978-1-5386-2916-1
[148]
N. Boucquey and A. Kinali, “Software Defined Radio Platform for Time and Frequency Metrology,” in Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium (EFTF/IFC 2017), Besançon, France, 2017.
Export
BibTeX
@inproceedings{Boucquey_EFTF/IFCS2017, TITLE = {Software Defined Radio Platform for Time and Frequency Metrology}, AUTHOR = {Boucquey, Nicolas and Kinali, Attila}, LANGUAGE = {eng}, ISBN = {978-1-5386-2916-1}, DOI = {10.1109/FCS.2017.8088968}, PUBLISHER = {IEEE}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium (EFTF/IFC 2017)}, PAGES = {598--599}, ADDRESS = {Besan{\c c}on, France}, }
Endnote
%0 Conference Proceedings %A Boucquey, Nicolas %A Kinali, Attila %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Software Defined Radio Platform for Time and Frequency Metrology : %G eng %U http://hdl.handle.net/21.11116/0000-0001-94B6-A %R 10.1109/FCS.2017.8088968 %D 2017 %B Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium %Z date of event: 2017-07-09 - 2017-07-13 %C Besançon, France %B Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium %P 598 - 599 %I IEEE %@ 978-1-5386-2916-1
[149]
K. Bringmann, A. Gronlund, and K. G. Larsen, “A Dichotomy for Regular Expression Membership Testing,” in 58th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2017), Berkeley, CA, USA, 2017.
Export
BibTeX
@inproceedings{Bringman_FOCS2017, TITLE = {A Dichotomy for Regular Expression Membership Testing}, AUTHOR = {Bringmann, Karl and Gronlund, Allan and Larsen, Kasper Green}, LANGUAGE = {eng}, ISBN = {978-1-5386-3464-6}, DOI = {10.1109/FOCS.2017.36}, PUBLISHER = {IEEE}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {58th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2017)}, PAGES = {307--318}, ADDRESS = {Berkeley, CA, USA}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Gronlund, Allan %A Larsen, Kasper Green %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Dichotomy for Regular Expression Membership Testing : %G eng %U http://hdl.handle.net/21.11116/0000-0000-0471-C %R 10.1109/FOCS.2017.36 %D 2017 %B 58th Annual IEEE Symposium on Foundations of Computer Science %Z date of event: 2017-10-15 - 2017-10-17 %C Berkeley, CA, USA %B 58th Annual IEEE Symposium on Foundations of Computer Science %P 307 - 318 %I IEEE %@ 978-1-5386-3464-6
[150]
K. Bringmann, “A Near-Linear Pseudopolynomial Time Algorithm for Subset Sum,” in Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), Barcelona, Spain, 2017.
Export
BibTeX
@inproceedings{DBLP:conf/soda/Bringmann17, TITLE = {A Near-Linear Pseudopolynomial Time Algorithm for Subset Sum}, AUTHOR = {Bringmann, Karl}, LANGUAGE = {eng}, DOI = {10.1137/1.9781611974782.69}, PUBLISHER = {SIAM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, PAGES = {1073--1084}, ADDRESS = {Barcelona, Spain}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Near-Linear Pseudopolynomial Time Algorithm for Subset Sum : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5522-4 %R 10.1137/1.9781611974782.69 %D 2017 %B Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2017-01-16 - 2017-01-19 %C Barcelona, Spain %B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %P 1073 - 1084 %I SIAM
[151]
K. Bringmann and P. Wellnitz, “Clique-Based Lower Bounds for Parsing Tree-Adjoining Grammars,” in 28th Annual Symposium on Combinatorial Pattern Matching (CPM 2017), Warsaw, Poland, 2017.
Export
BibTeX
@inproceedings{BringmannCPM2017, TITLE = {Clique-Based Lower Bounds for Parsing Tree-Adjoining Grammars}, AUTHOR = {Bringmann, Karl and Wellnitz, Philip}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-039-2}, URL = {urn:nbn:de:0030-drops-73329}, DOI = {10.4230/LIPIcs.CPM.2017.12}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {28th Annual Symposium on Combinatorial Pattern Matching (CPM 2017)}, EDITOR = {K{\"a}rkk{\"a}inen, Juha and Radoszewski, Jakub and Rytter, Wojciech}, PAGES = {1--14}, EID = {12}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {78}, ADDRESS = {Warsaw, Poland}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Wellnitz, Philip %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Clique-Based Lower Bounds for Parsing Tree-Adjoining Grammars : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-FB92-0 %R 10.4230/LIPIcs.CPM.2017.12 %U urn:nbn:de:0030-drops-73329 %D 2017 %B 28th Annual Symposium on Combinatorial Pattern Matching %Z date of event: 2017-07-04 - 2017-07-06 %C Warsaw, Poland %B 28th Annual Symposium on Combinatorial Pattern Matching %E Kärkkäinen, Juha; Radoszewski, Jakub; Rytter, Wojciech %P 1 - 14 %Z sequence number: 12 %I Schloss Dagstuhl %@ 978-3-95977-039-2 %B Leibniz International Proceedings in Informatics %N 78 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2017/7332/
[152]
K. Bringmann and K. Panagiotou, “Efficient Sampling Methods for Discrete Distributions,” Algorithmica, vol. 79, no. 2, 2017.
Export
BibTeX
@article{BringmannAlgorithmica2016, TITLE = {Efficient Sampling Methods for Discrete Distributions}, AUTHOR = {Bringmann, Karl and Panagiotou, Konstantinos}, LANGUAGE = {eng}, ISSN = {0178-4617}, DOI = {10.1007/s00453-016-0205-0}, PUBLISHER = {Springer-Verlag}, ADDRESS = {New York}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Algorithmica}, VOLUME = {79}, NUMBER = {2}, PAGES = {484--508}, }
Endnote
%0 Journal Article %A Bringmann, Karl %A Panagiotou, Konstantinos %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Efficient Sampling Methods for Discrete Distributions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-85D0-8 %R 10.1007/s00453-016-0205-0 %7 2016-08-29 %D 2017 %J Algorithmica %V 79 %N 2 %& 484 %P 484 - 508 %I Springer-Verlag %C New York %@ false
[153]
K. Bringmann, S. Cabello, and M. Emmerich, “Maximum Volume Subset Selection for Anchored Boxes,” in 33rd International Symposium on Computational Geometry (SoCG 2017), Brisbane, Australia, 2017.
Export
BibTeX
@inproceedings{bringmann:scg, TITLE = {Maximum Volume Subset Selection for Anchored Boxes}, AUTHOR = {Bringmann, Karl and Cabello, Sergio and Emmerich, Michael}, LANGUAGE = {eng}, ISBN = {978-3-95977-038-5}, URL = {urn:nbn:de:0030-drops-72011}, DOI = {10.4230/LIPIcs.SoCG.2017.22}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {33rd International Symposium on Computational Geometry (SoCG 2017)}, EDITOR = {Aronov, Boris and Katz, Matthew J.}, PAGES = {1--15}, EID = {22}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {77}, ADDRESS = {Brisbane, Australia}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Cabello, Sergio %A Emmerich, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Maximum Volume Subset Selection for Anchored Boxes : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-7D6F-9 %U urn:nbn:de:0030-drops-72011 %R 10.4230/LIPIcs.SoCG.2017.22 %D 2017 %B 33rd International Symposium on Computational Geometry %Z date of event: 2017-07-04 - 2017-07-07 %C Brisbane, Australia %B 33rd International Symposium on Computational Geometry %E Aronov, Boris; Katz, Matthew J. %P 1 - 15 %Z sequence number: 22 %I Schloss Dagstuhl %@ 978-3-95977-038-5 %B Leibniz International Proceedings in Informatics %N 77 %U http://drops.dagstuhl.de/opus/volltexte/2017/7201/
[154]
K. Bringmann, R. Keusch, and J. Lengler, “Sampling Geometric Inhomogeneous Random Graphs in Linear Time,” in 25th Annual European Symposium on Algorithms (ESA 2017), Vienna, Austria, 2017.
Export
BibTeX
@inproceedings{BringmannESA2017, TITLE = {Sampling Geometric Inhomogeneous Random Graphs in Linear Time}, AUTHOR = {Bringmann, Karl and Keusch, Ralph and Lengler, Johannes}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-049-1}, URL = {urn:nbn:de:0030-drops-78396}, DOI = {10.4230/LIPIcs.ESA.2017.20}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {25th Annual European Symposium on Algorithms (ESA 2017)}, EDITOR = {Pruhs, Kirk and Sohler, Christian}, PAGES = {1--15}, EID = {20}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {87}, ADDRESS = {Vienna, Austria}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Keusch, Ralph %A Lengler, Johannes %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Sampling Geometric Inhomogeneous Random Graphs in Linear Time : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-FB87-A %R 10.4230/LIPIcs.ESA.2017.20 %U urn:nbn:de:0030-drops-78396 %D 2017 %B 25th Annual European Symposium on Algorithms %Z date of event: 2017-09-04 - 2017-09-06 %C Vienna, Austria %B 25th Annual European Symposium on Algorithms %E Pruhs, Kirk; Sohler, Christian %P 1 - 15 %Z sequence number: 20 %I Schloss Dagstuhl %@ 978-3-95977-049-1 %B Leibniz International Proceedings in Informatics %N 87 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2017/7839/
[155]
K. Bringmann, P. Gawrychowski, S. Mozes, and O. Weimann, “Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless APSP can),” 2017. [Online]. Available: http://arxiv.org/abs/1703.08940. (arXiv: 1703.08940)
Abstract
The edit distance between two rooted ordered trees with $n$ nodes labeled from an alphabet~$\Sigma$ is the minimum cost of transforming one tree into the other by a sequence of elementary operations consisting of deleting and relabeling existing nodes, as well as inserting new nodes. Tree edit distance is a well known generalization of string edit distance. The fastest known algorithm for tree edit distance runs in cubic $O(n^3)$ time and is based on a similar dynamic programming solution as string edit distance. In this paper we show that a truly subcubic $O(n^{3-\varepsilon})$ time algorithm for tree edit distance is unlikely: For $|\Sigma| = \Omega(n)$, a truly subcubic algorithm for tree edit distance implies a truly subcubic algorithm for the all pairs shortest paths problem. For $|\Sigma| = O(1)$, a truly subcubic algorithm for tree edit distance implies an $O(n^{k-\varepsilon})$ algorithm for finding a maximum weight $k$-clique. Thus, while in terms of upper bounds string edit distance and tree edit distance are highly related, in terms of lower bounds string edit distance exhibits the hardness of the strong exponential time hypothesis [Backurs, Indyk STOC'15] whereas tree edit distance exhibits the hardness of all pairs shortest paths. Our result provides a matching conditional lower bound for one of the last remaining classic dynamic programming problems.
Export
BibTeX
@online{DBLP:journals/corr/BringmannGMW17, TITLE = {Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless {APSP} can)}, AUTHOR = {Bringmann, Karl and Gawrychowski, Pawe{\l} and Mozes, Shay and Weimann, Oren}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1703.08940}, EPRINT = {1703.08940}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {The edit distance between two rooted ordered trees with $n$ nodes labeled from an alphabet~$\Sigma$ is the minimum cost of transforming one tree into the other by a sequence of elementary operations consisting of deleting and relabeling existing nodes, as well as inserting new nodes. Tree edit distance is a well known generalization of string edit distance. The fastest known algorithm for tree edit distance runs in cubic $O(n^3)$ time and is based on a similar dynamic programming solution as string edit distance. In this paper we show that a truly subcubic $O(n^{3-\varepsilon})$ time algorithm for tree edit distance is unlikely: For $|\Sigma| = \Omega(n)$, a truly subcubic algorithm for tree edit distance implies a truly subcubic algorithm for the all pairs shortest paths problem. For $|\Sigma| = O(1)$, a truly subcubic algorithm for tree edit distance implies an $O(n^{k-\varepsilon})$ algorithm for finding a maximum weight $k$-clique. Thus, while in terms of upper bounds string edit distance and tree edit distance are highly related, in terms of lower bounds string edit distance exhibits the hardness of the strong exponential time hypothesis [Backurs, Indyk STOC'15] whereas tree edit distance exhibits the hardness of all pairs shortest paths. Our result provides a matching conditional lower bound for one of the last remaining classic dynamic programming problems.}, }
Endnote
%0 Report %A Bringmann, Karl %A Gawrychowski, Paweł %A Mozes, Shay %A Weimann, Oren %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless APSP can) : %O Tree Edit Distance Cannot be Computed in Strongly Subcubic Time (unless {APSP} can) %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-8A70-3 %U http://arxiv.org/abs/1703.08940 %D 2017 %X The edit distance between two rooted ordered trees with $n$ nodes labeled from an alphabet~$\Sigma$ is the minimum cost of transforming one tree into the other by a sequence of elementary operations consisting of deleting and relabeling existing nodes, as well as inserting new nodes. Tree edit distance is a well known generalization of string edit distance. The fastest known algorithm for tree edit distance runs in cubic $O(n^3)$ time and is based on a similar dynamic programming solution as string edit distance. In this paper we show that a truly subcubic $O(n^{3-\varepsilon})$ time algorithm for tree edit distance is unlikely: For $|\Sigma| = \Omega(n)$, a truly subcubic algorithm for tree edit distance implies a truly subcubic algorithm for the all pairs shortest paths problem. For $|\Sigma| = O(1)$, a truly subcubic algorithm for tree edit distance implies an $O(n^{k-\varepsilon})$ algorithm for finding a maximum weight $k$-clique. Thus, while in terms of upper bounds string edit distance and tree edit distance are highly related, in terms of lower bounds string edit distance exhibits the hardness of the strong exponential time hypothesis [Backurs, Indyk STOC'15] whereas tree edit distance exhibits the hardness of all pairs shortest paths. Our result provides a matching conditional lower bound for one of the last remaining classic dynamic programming problems. %K Computer Science, Data Structures and Algorithms, cs.DS
[156]
K. Bringmann and S. Krinninger, “A Note on Hardness of Diameter Approximation,” 2017. [Online]. Available: http://arxiv.org/abs/1705.02127. (arXiv: 1705.02127)
Abstract
We revisit the hardness of approximating the diameter of a network. In the CONGEST model, $ \tilde \Omega (n) $ rounds are necessary to compute the diameter [Frischknecht et al. SODA'12]. Abboud et al. DISC 2016 extended this result to sparse graphs and, at a more fine-grained level, showed that, for any integer $ 1 \leq \ell \leq \operatorname{polylog} (n) $, distinguishing between networks of diameter $ 4 \ell + 2 $ and $ 6 \ell + 1 $ requires $ \tilde \Omega (n) $ rounds. We slightly tighten this result by showing that even distinguishing between diameter $ 2 \ell + 1 $ and $ 3 \ell + 1 $ requires $ \tilde \Omega (n) $ rounds. The reduction of Abboud et al. is inspired by recent conditional lower bounds in the RAM model, where the orthogonal vectors problem plays a pivotal role. In our new lower bound, we make the connection to orthogonal vectors explicit, leading to a conceptually more streamlined exposition. This is suited for teaching both the lower bound in the CONGEST model and the conditional lower bound in the RAM model.
Export
BibTeX
@online{DBLP:journals/corr/BringmannK17, TITLE = {A Note on Hardness of Diameter Approximation}, AUTHOR = {Bringmann, Karl and Krinninger, Sebastian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1705.02127}, EPRINT = {1705.02127}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We revisit the hardness of approximating the diameter of a network. In the CONGEST model, $ \tilde \Omega (n) $ rounds are necessary to compute the diameter [Frischknecht et al. SODA'12]. Abboud et al. DISC 2016 extended this result to sparse graphs and, at a more fine-grained level, showed that, for any integer $ 1 \leq \ell \leq \operatorname{polylog} (n) $, distinguishing between networks of diameter $ 4 \ell + 2 $ and $ 6 \ell + 1 $ requires $ \tilde \Omega (n) $ rounds. We slightly tighten this result by showing that even distinguishing between diameter $ 2 \ell + 1 $ and $ 3 \ell + 1 $ requires $ \tilde \Omega (n) $ rounds. The reduction of Abboud et al. is inspired by recent conditional lower bounds in the RAM model, where the orthogonal vectors problem plays a pivotal role. In our new lower bound, we make the connection to orthogonal vectors explicit, leading to a conceptually more streamlined exposition. This is suited for teaching both the lower bound in the CONGEST model and the conditional lower bound in the RAM model.}, }
Endnote
%0 Report %A Bringmann, Karl %A Krinninger, Sebastian %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T A Note on Hardness of Diameter Approximation : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-89B7-D %U http://arxiv.org/abs/1705.02127 %D 2017 %X We revisit the hardness of approximating the diameter of a network. In the CONGEST model, $ \tilde \Omega (n) $ rounds are necessary to compute the diameter [Frischknecht et al. SODA'12]. Abboud et al. DISC 2016 extended this result to sparse graphs and, at a more fine-grained level, showed that, for any integer $ 1 \leq \ell \leq \operatorname{polylog} (n) $, distinguishing between networks of diameter $ 4 \ell + 2 $ and $ 6 \ell + 1 $ requires $ \tilde \Omega (n) $ rounds. We slightly tighten this result by showing that even distinguishing between diameter $ 2 \ell + 1 $ and $ 3 \ell + 1 $ requires $ \tilde \Omega (n) $ rounds. The reduction of Abboud et al. is inspired by recent conditional lower bounds in the RAM model, where the orthogonal vectors problem plays a pivotal role. In our new lower bound, we make the connection to orthogonal vectors explicit, leading to a conceptually more streamlined exposition. This is suited for teaching both the lower bound in the CONGEST model and the conditional lower bound in the RAM model. %K Computer Science, Data Structures and Algorithms, cs.DS,Computer Science, Distributed, Parallel, and Cluster Computing, cs.DC
[157]
K. Bringmann, T. Dueholm Hansen, and S. Krinninger, “Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs,” 2017. [Online]. Available: http://arxiv.org/abs/1704.08122. (arXiv: 1704.08122)
Abstract
We study the problem of finding the cycle of minimum cost-to-time ratio in a directed graph with $ n $ nodes and $ m $ edges. This problem has a long history in combinatorial optimization and has recently seen interesting applications in the context of quantitative verification. We focus on strongly polynomial algorithms to cover the use-case where the weights are relatively large compared to the size of the graph. Our main result is an algorithm with running time $ \tilde O (m^{3/4} n^{3/2}) $, which gives the first improvement over Megiddo's $ \tilde O (n^3) $ algorithm [JACM'83] for sparse graphs. We further demonstrate how to obtain both an algorithm with running time $ n^3 / 2^{\Omega{(\sqrt{\log n})}} $ on general graphs and an algorithm with running time $ \tilde O (n) $ on constant treewidth graphs. To obtain our main result, we develop a parallel algorithm for negative cycle detection and single-source shortest paths that might be of independent interest.
Export
BibTeX
@online{DBLP:journals/corr/BringmannHK17, TITLE = {Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs}, AUTHOR = {Bringmann, Karl and Dueholm Hansen, Thomas and Krinninger, Sebastian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1704.08122}, EPRINT = {1704.08122}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We study the problem of finding the cycle of minimum cost-to-time ratio in a directed graph with $ n $ nodes and $ m $ edges. This problem has a long history in combinatorial optimization and has recently seen interesting applications in the context of quantitative verification. We focus on strongly polynomial algorithms to cover the use-case where the weights are relatively large compared to the size of the graph. Our main result is an algorithm with running time $ \tilde O (m^{3/4} n^{3/2}) $, which gives the first improvement over Megiddo's $ \tilde O (n^3) $ algorithm [JACM'83] for sparse graphs. We further demonstrate how to obtain both an algorithm with running time $ n^3 / 2^{\Omega{(\sqrt{\log n})}} $ on general graphs and an algorithm with running time $ \tilde O (n) $ on constant treewidth graphs. To obtain our main result, we develop a parallel algorithm for negative cycle detection and single-source shortest paths that might be of independent interest.}, }
Endnote
%0 Report %A Bringmann, Karl %A Dueholm Hansen, Thomas %A Krinninger, Sebastian %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-89BC-3 %U http://arxiv.org/abs/1704.08122 %D 2017 %X We study the problem of finding the cycle of minimum cost-to-time ratio in a directed graph with $ n $ nodes and $ m $ edges. This problem has a long history in combinatorial optimization and has recently seen interesting applications in the context of quantitative verification. We focus on strongly polynomial algorithms to cover the use-case where the weights are relatively large compared to the size of the graph. Our main result is an algorithm with running time $ \tilde O (m^{3/4} n^{3/2}) $, which gives the first improvement over Megiddo's $ \tilde O (n^3) $ algorithm [JACM'83] for sparse graphs. We further demonstrate how to obtain both an algorithm with running time $ n^3 / 2^{\Omega{(\sqrt{\log n})}} $ on general graphs and an algorithm with running time $ \tilde O (n) $ on constant treewidth graphs. To obtain our main result, we develop a parallel algorithm for negative cycle detection and single-source shortest paths that might be of independent interest. %K Computer Science, Data Structures and Algorithms, cs.DS
[158]
K. Bringmann, T. Dueholm Hansen, and S. Krinninger, “Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs,” in 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017), Warsaw, Poland, 2017.
Export
BibTeX
@inproceedings{BringmannICALP2017, TITLE = {Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs}, AUTHOR = {Bringmann, Karl and Dueholm Hansen, Thomas and Krinninger, Sebastian}, LANGUAGE = {eng}, ISBN = {978-3-95977-041-5}, URL = {urn:nbn:de:0030-drops-74398}, DOI = {10.4230/LIPIcs.ICALP.2017.124}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)}, EDITOR = {Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca}, PAGES = {1--16}, EID = {124}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {80}, ADDRESS = {Warsaw, Poland}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Dueholm Hansen, Thomas %A Krinninger, Sebastian %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-89C4-F %R 10.4230/LIPIcs.ICALP.2017.124 %U urn:nbn:de:0030-drops-74398 %D 2017 %B 44th International Colloquium on Automata, Languages, and Programming %Z date of event: 2017-07-10 - 2017-07-14 %C Warsaw, Poland %B 44th International Colloquium on Automata, Languages, and Programming %E Chatzigiannakis, Ioannis; Indyk, Piotr; Kuhn, Fabian; Muscholl, Anca %P 1 - 16 %Z sequence number: 124 %I Schloss Dagstuhl %@ 978-3-95977-041-5 %B Leibniz International Proceedings in Informatics %N 80 %U http://drops.dagstuhl.de/opus/volltexte/2017/7439/
[159]
K. Bringmann, C. Ikenmeyer, and J. Zuiddam, “On Algebraic Branching Programs of Small Width,” Electronic Colloquium on Computational Complexity (ECCC) : Report Series, vol. 34 (Revision 1), 2017.
Export
BibTeX
@article{BringmannECCC2017, TITLE = {On Algebraic Branching Programs of Small Width}, AUTHOR = {Bringmann, Karl and Ikenmeyer, Christian and Zuiddam, Jeroen}, LANGUAGE = {eng}, ISSN = {1433-8092}, PUBLISHER = {Hasso-Plattner-Institut f{\"u}r Softwaretechnik GmbH}, ADDRESS = {Potsdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, JOURNAL = {Electronic Colloquium on Computational Complexity (ECCC) : Report Series}, VOLUME = {34 (Revision 1)}, PAGES = {1--30}, }
Endnote
%0 Journal Article %A Bringmann, Karl %A Ikenmeyer, Christian %A Zuiddam, Jeroen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On Algebraic Branching Programs of Small Width : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-89B2-8 %7 2017 %D 2017 %J Electronic Colloquium on Computational Complexity (ECCC) : Report Series %V 34 (Revision 1) %& 1 %P 1 - 30 %I Hasso-Plattner-Institut für Softwaretechnik GmbH %C Potsdam %@ false %U https://eccc.weizmann.ac.il/report/2017/034/
[160]
K. Bringmann, C. Ikenmeyer, and J. Zuiddam, “On Algebraic Branching Programs of Small Width,” 2017. [Online]. Available: http://arxiv.org/abs/1702.05328. (arXiv: 1702.05328)
Abstract
In 1979 Valiant showed that the complexity class VP_e of families with polynomially bounded formula size is contained in the class VP_s of families that have algebraic branching programs (ABPs) of polynomially bounded size. Motivated by the problem of separating these classes we study the topological closure VP_e-bar, i.e. the class of polynomials that can be approximated arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a strikingly simple complete polynomial (in characteristic different from 2) whose recursive definition is similar to the Fibonacci numbers. Further understanding this polynomial seems to be a promising route to new formula lower bounds. Our methods are rooted in the study of ABPs of small constant width. In 1992 Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3 ABP size. We extend their result (in characteristic different from 2) by showing that approximate formula size is polynomially equivalent to approximate width-2 ABP size. This is surprising because in 2011 Allender and Wang gave explicit polynomials that cannot be computed by width-2 ABPs at all! The details of our construction lead to the aforementioned characterization of VP_e-bar. As a natural continuation of this work we prove that the class VNP can be described as the class of families that admit a hypercube summation of polynomially bounded dimension over a product of polynomially many affine linear forms. This gives the first separations of algebraic complexity classes from their nondeterministic analogs.
Export
BibTeX
@online{BringmannArXiv2017, TITLE = {On Algebraic Branching Programs of Small Width}, AUTHOR = {Bringmann, Karl and Ikenmeyer, Christian and Zuiddam, Jeroen}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1702.05328}, EPRINT = {1702.05328}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {In 1979 Valiant showed that the complexity class VP_e of families with polynomially bounded formula size is contained in the class VP_s of families that have algebraic branching programs (ABPs) of polynomially bounded size. Motivated by the problem of separating these classes we study the topological closure VP_e-bar, i.e. the class of polynomials that can be approximated arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a strikingly simple complete polynomial (in characteristic different from 2) whose recursive definition is similar to the Fibonacci numbers. Further understanding this polynomial seems to be a promising route to new formula lower bounds. Our methods are rooted in the study of ABPs of small constant width. In 1992 Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3 ABP size. We extend their result (in characteristic different from 2) by showing that approximate formula size is polynomially equivalent to approximate width-2 ABP size. This is surprising because in 2011 Allender and Wang gave explicit polynomials that cannot be computed by width-2 ABPs at all! The details of our construction lead to the aforementioned characterization of VP_e-bar. As a natural continuation of this work we prove that the class VNP can be described as the class of families that admit a hypercube summation of polynomially bounded dimension over a product of polynomially many affine linear forms. This gives the first separations of algebraic complexity classes from their nondeterministic analogs.}, }
Endnote
%0 Report %A Bringmann, Karl %A Ikenmeyer, Christian %A Zuiddam, Jeroen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On Algebraic Branching Programs of Small Width : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-89A4-8 %U http://arxiv.org/abs/1702.05328 %D 2017 %X In 1979 Valiant showed that the complexity class VP_e of families with polynomially bounded formula size is contained in the class VP_s of families that have algebraic branching programs (ABPs) of polynomially bounded size. Motivated by the problem of separating these classes we study the topological closure VP_e-bar, i.e. the class of polynomials that can be approximated arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a strikingly simple complete polynomial (in characteristic different from 2) whose recursive definition is similar to the Fibonacci numbers. Further understanding this polynomial seems to be a promising route to new formula lower bounds. Our methods are rooted in the study of ABPs of small constant width. In 1992 Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3 ABP size. We extend their result (in characteristic different from 2) by showing that approximate formula size is polynomially equivalent to approximate width-2 ABP size. This is surprising because in 2011 Allender and Wang gave explicit polynomials that cannot be computed by width-2 ABPs at all! The details of our construction lead to the aforementioned characterization of VP_e-bar. As a natural continuation of this work we prove that the class VNP can be described as the class of families that admit a hypercube summation of polynomially bounded dimension over a product of polynomially many affine linear forms. This gives the first separations of algebraic complexity classes from their nondeterministic analogs. %K Computer Science, Computational Complexity, cs.CC,
[161]
K. Bringmann, C. Ikenmeyer, and J. Zuiddam, “On Algebraic Branching Programs of Small Width,” in 32nd Computational Complexity Conference (CCC 2017), Riga, Latvia, 2017.
Export
BibTeX
@inproceedings{BringmannCCC2017, TITLE = {On Algebraic Branching Programs of Small Width}, AUTHOR = {Bringmann, Karl and Ikenmeyer, Christian and Zuiddam, Jeroen}, LANGUAGE = {eng}, ISBN = {978-3-95977-040-8}, URL = {urn:nbn:de:0030-drops-75217}, DOI = {10.4230/LIPIcs.CCC.2017.20}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {32nd Computational Complexity Conference (CCC 2017)}, EDITOR = {O'Donnell, Ryan}, PAGES = {1--31}, EID = {20}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {79}, ADDRESS = {Riga, Latvia}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Ikenmeyer, Christian %A Zuiddam, Jeroen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On Algebraic Branching Programs of Small Width : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-FB78-C %R 10.4230/LIPIcs.CCC.2017.20 %U urn:nbn:de:0030-drops-75217 %D 2017 %B 32nd Computational Complexity Conference %Z date of event: 2017-07-06 - 2017-07-09 %C Riga, Latvia %B 32nd Computational Complexity Conference %E O'Donnell, Ryan %P 1 - 31 %Z sequence number: 20 %I Schloss Dagstuhl %@ 978-3-95977-040-8 %B Leibniz International Proceedings in Informatics %N 79 %U http://drops.dagstuhl.de/opus/volltexte/2017/7521/
[162]
K. Bringmann, R. Keusch, J. Lengler, Y. Maus, and A. R. Molla, “Greedy Routing and the Algorithmic Small-World Phenomenon,” in PODC’17, ACM Symposium on Principles of Distributed Computing, Washington, DC, USA, 2017.
Export
BibTeX
@inproceedings{Bringmann_PODC2017, TITLE = {Greedy Routing and the Algorithmic Small-World Phenomenon}, AUTHOR = {Bringmann, Karl and Keusch, Ralph and Lengler, Johannes and Maus, Yannic and Molla, Anisur Rahaman}, LANGUAGE = {eng}, ISBN = {978-1-4503-4992-5}, DOI = {10.1145/3087801.3087829}, PUBLISHER = {ACM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {PODC'17, ACM Symposium on Principles of Distributed Computing}, PAGES = {371--380}, ADDRESS = {Washington, DC, USA}, }
Endnote
%0 Conference Proceedings %A Bringmann, Karl %A Keusch, Ralph %A Lengler, Johannes %A Maus, Yannic %A Molla, Anisur Rahaman %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T Greedy Routing and the Algorithmic Small-World Phenomenon : %G eng %U http://hdl.handle.net/21.11116/0000-0002-9CF4-B %R 10.1145/3087801.3087829 %D 2017 %B ACM Symposium on Principles of Distributed Computing %Z date of event: 2017-07-25 - 2017-07-27 %C Washington, DC, USA %B PODC'17 %P 371 - 380 %I ACM %@ 978-1-4503-4992-5
[163]
K. Bringmann and M. Künnemann, “Improved Approximation for Fréchet Distance on c-packed Curves Matching Conditional Lower Bounds,” International Journal of Computational Geometry and Applications, vol. 27, no. 1/2, 2017.
Export
BibTeX
@article{Bringmann2017j, TITLE = {Improved Approximation for {F}r\'{e}chet Distance on $c$-packed Curves Matching Conditional Lower Bounds}, AUTHOR = {Bringmann, Karl and K{\"u}nnemann, Marvin}, LANGUAGE = {eng}, ISSN = {0218-1959}, DOI = {10.1142/S0218195917600056}, PUBLISHER = {World Scientific}, ADDRESS = {Singapore}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {International Journal of Computational Geometry and Applications}, VOLUME = {27}, NUMBER = {1/2}, PAGES = {85--119}, }
Endnote
%0 Journal Article %A Bringmann, Karl %A Künnemann, Marvin %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Improved Approximation for Fréchet Distance on c-packed Curves Matching Conditional Lower Bounds : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A902-D %R 10.1142/S0218195917600056 %7 2017 %D 2017 %J International Journal of Computational Geometry and Applications %V 27 %N 1/2 %& 85 %P 85 - 119 %I World Scientific %C Singapore %@ false
[164]
J. Bund, C. Lenzen, and M. Medina, “Near-Optimal Metastability-Containing Sorting Networks,” in Proceedings of the 2017 Design, Automation & Test in Europe (DATE 2017), Lausanne, Switzerland, 2017.
Export
BibTeX
@inproceedings{BundDATE2017, TITLE = {Near-Optimal Metastability-Containing Sorting Networks}, AUTHOR = {Bund, Johannes and Lenzen, Christoph and Medina, Moti}, LANGUAGE = {eng}, ISBN = {978-1-5090-5826-6}, DOI = {10.23919/DATE.2017.7926987}, PUBLISHER = {IEEE}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the 2017 Design, Automation \& Test in Europe (DATE 2017)}, PAGES = {226--231}, ADDRESS = {Lausanne, Switzerland}, }
Endnote
%0 Conference Proceedings %A Bund, Johannes %A Lenzen, Christoph %A Medina, Moti %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Near-Optimal Metastability-Containing Sorting Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-571A-2 %R 10.23919/DATE.2017.7926987 %D 2017 %B Design, Automation & Test in Europe Conference & Exhibition %Z date of event: 2017-03-27 - 2017-03-31 %C Lausanne, Switzerland %B Proceedings of the 2017 Design, Automation & Test in Europe %P 226 - 231 %I IEEE %@ 978-1-5090-5826-6
[165]
P. Bürgisser and C. Ikenmeyer, “Fundamental Invariants of Orbit Closures,” Journal of Algebra, vol. 477, 2017.
Export
BibTeX
@article{BI:17, TITLE = {Fundamental Invariants of Orbit Closures}, AUTHOR = {B{\"u}rgisser, Peter and Ikenmeyer, Christian}, LANGUAGE = {eng}, ISSN = {0021-8693}, DOI = {10.1016/j.jalgebra.2016.12.035}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Journal of Algebra}, VOLUME = {477}, PAGES = {390--434}, }
Endnote
%0 Journal Article %A Bürgisser, Peter %A Ikenmeyer, Christian %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fundamental Invariants of Orbit Closures : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4F2E-8 %R 10.1016/j.jalgebra.2016.12.035 %7 2017-01-17 %D 2017 %J Journal of Algebra %V 477 %& 390 %P 390 - 434 %I Elsevier %C Amsterdam %@ false
[166]
P. Bürgisser, C. Ikenmeyer, and J. Hüttenhain, “Permanent versus Determinant: Not via Saturations,” Proceedings of the American Mathematical Society, vol. 145, 2017.
Export
BibTeX
@article{BHI:17, TITLE = {Permanent versus Determinant: {N}ot via Saturations}, AUTHOR = {B{\"u}rgisser, Peter and Ikenmeyer, Christian and H{\"u}ttenhain, Jesko}, LANGUAGE = {eng}, ISSN = {0002-9939}, DOI = {10.1090/proc/13310}, PUBLISHER = {American Mathematical Society}, ADDRESS = {Providence, R.I. [etc.]}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Proceedings of the American Mathematical Society}, VOLUME = {145}, PAGES = {1247--1258}, }
Endnote
%0 Journal Article %A Bürgisser, Peter %A Ikenmeyer, Christian %A Hüttenhain, Jesko %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Permanent versus Determinant: Not via Saturations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4F48-A %R 10.1090/proc/13310 %7 2017 %D 2017 %J Proceedings of the American Mathematical Society %V 145 %& 1247 %P 1247 - 1258 %I American Mathematical Society %C Providence, R.I. [etc.] %@ false
[167]
P. Chalermsook and A. Schmid, “Finding Triangles for Maximum Planar Subgraphs,” in WALCOM: Algorithms and Computation, Hsinchu, Taiwan, 2017.
Export
BibTeX
@inproceedings{PCAS2017, TITLE = {Finding Triangles for Maximum Planar Subgraphs}, AUTHOR = {Chalermsook, Parinya and Schmid, Andreas}, LANGUAGE = {eng}, ISBN = {978-3-319-53924-9}, DOI = {10.1007/978-3-319-53925-6_29}, PUBLISHER = {Springer}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {WALCOM: Algorithms and Computation}, EDITOR = {Poon, Sheung-Hung and Rahman, Md. Saidur and Yen, Hsu-Chun}, PAGES = {373--384}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {10167}, ADDRESS = {Hsinchu, Taiwan}, }
Endnote
%0 Conference Proceedings %A Chalermsook, Parinya %A Schmid, Andreas %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Finding Triangles for Maximum Planar Subgraphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5DDC-F %R 10.1007/978-3-319-53925-6_29 %D 2017 %B 11th International Conference and Workshops on Algorithms and Computation %Z date of event: 2017-03-29 - 2017-03-31 %C Hsinchu, Taiwan %B WALCOM: Algorithms and Computation %E Poon, Sheung-Hung; Rahman, Md. Saidur; Yen, Hsu-Chun %P 373 - 384 %I Springer %@ 978-3-319-53924-9 %B Lecture Notes in Computer Science %N 10167
[168]
P. Chalermsook, S. Das, B. Laekhanukit, and D. Vaz, “Beyond Metric Embedding: Approximating Group Steiner Trees on Bounded Treewidth Graphs,” in Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), Barcelona, Spain, 2017.
Export
BibTeX
@inproceedings{doi:10.1137/1.9781611974782.47, TITLE = {Beyond Metric Embedding: {A}pproximating {Group Steiner Trees} on Bounded Treewidth Graphs}, AUTHOR = {Chalermsook, Parinya and Das, Syamantak and Laekhanukit, Bundit and Vaz, Daniel}, LANGUAGE = {eng}, DOI = {10.1137/1.9781611974782.47}, PUBLISHER = {SIAM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, PAGES = {737--751}, ADDRESS = {Barcelona, Spain}, }
Endnote
%0 Conference Proceedings %A Chalermsook, Parinya %A Das, Syamantak %A Laekhanukit, Bundit %A Vaz, Daniel %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Beyond Metric Embedding: Approximating Group Steiner Trees on Bounded Treewidth Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-573D-3 %R 10.1137/1.9781611974782.47 %D 2017 %B Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2017-01-16 - 2017-01-19 %C Barcelona, Spain %B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %P 737 - 751 %I SIAM
[169]
P. Chalermsook and D. Vaz, “New Integrality Gap Results for the Firefighters Problem on Trees,” in Approximation and Online Algorithms (WAOA 2016), Aarhus, Denmark, 2017.
Export
BibTeX
@inproceedings{Chalermsook2017, TITLE = {New Integrality Gap Results for the Firefighters Problem on Trees}, AUTHOR = {Chalermsook, Parinya and Vaz, Daniel}, LANGUAGE = {eng}, ISBN = {978-3-319-51740-7}, DOI = {10.1007/978-3-319-51741-4_6}, PUBLISHER = {Springer}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Approximation and Online Algorithms (WAOA 2016)}, EDITOR = {Jansen, Klaus and Mastrolilli, Monaldo}, PAGES = {65--77}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {10138}, ADDRESS = {Aarhus, Denmark}, }
Endnote
%0 Conference Proceedings %A Chalermsook, Parinya %A Vaz, Daniel %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T New Integrality Gap Results for the Firefighters Problem on Trees : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-575B-0 %R 10.1007/978-3-319-51741-4_6 %D 2017 %B 14th Workshop on Approximation and Online Algorithms %Z date of event: 2016-08-25 - 2016-08-26 %C Aarhus, Denmark %B Approximation and Online Algorithms %E Jansen, Klaus; Mastrolilli, Monaldo %P 65 - 77 %I Springer %@ 978-3-319-51740-7 %B Lecture Notes in Computer Science %N 10138
[170]
L. S. Chandran, D. Issac, and A. Karrenbauer, “On the Parameterized Complexity of Biclique Cover and Partition,” in 11th International Symposium on Parameterized and Exact Computation (IPEC 2016), Aarhus, Denmark, 2017.
Export
BibTeX
@inproceedings{BicliqueFPT, TITLE = {On the Parameterized Complexity of Biclique Cover and Partition}, AUTHOR = {Chandran, L. Sunil and Issac, Davis and Karrenbauer, Andreas}, LANGUAGE = {eng}, ISBN = {978-3-95977-023-1}, URL = {urn:nbn:de:0030-drops-69293}, DOI = {10.4230/LIPIcs.IPEC.2016.11}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2016}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {11th International Symposium on Parameterized and Exact Computation (IPEC 2016)}, EDITOR = {Guo, Jiong and Hermelin, Danny}, PAGES = {1--13}, EID = {11}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {63}, ADDRESS = {Aarhus, Denmark}, }
Endnote
%0 Conference Proceedings %A Chandran, L. Sunil %A Issac, Davis %A Karrenbauer, Andreas %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Parameterized Complexity of Biclique Cover and Partition : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-53DB-3 %R 10.4230/LIPIcs.IPEC.2016.11 %U urn:nbn:de:0030-drops-69293 %D 2017 %B 11th International Symposium on Parameterized and Exact Computation %Z date of event: 2016-08-24 - 2016-08-26 %C Aarhus, Denmark %B 11th International Symposium on Parameterized and Exact Computation %E Guo, Jiong; Hermelin, Danny %P 1 - 13 %Z sequence number: 11 %I Schloss Dagstuhl %@ 978-3-95977-023-1 %B Leibniz International Proceedings in Informatics %N 63 %U http://drops.dagstuhl.de/opus/volltexte/2017/6929/
[171]
L. Chiantini, C. Ikenmeyer, J. M. Landsberg, and G. Ottaviani, “The Geometry of Rank Decompositions of Matrix Multiplication I: 2x2 Matrices,” Experimental Mathematics, 2017.
Export
BibTeX
@article{Chiantini2017, TITLE = {The geometry of rank decompositions of matrix multiplication I: $2\times 2$ matrices}, AUTHOR = {Chiantini, Luca and Ikenmeyer, Christian and Landsberg, J. M. and Ottaviani, Giorgio}, LANGUAGE = {eng}, ISSN = {1058-6458}, DOI = {10.1080/10586458.2017.1403981}, PUBLISHER = {Taylor \& Francis}, ADDRESS = {Boston, MA}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, JOURNAL = {Experimental Mathematics}, }
Endnote
%0 Journal Article %A Chiantini, Luca %A Ikenmeyer, Christian %A Landsberg, J. M. %A Ottaviani, Giorgio %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T The Geometry of Rank Decompositions of Matrix Multiplication I: 2x2 Matrices : %G eng %U http://hdl.handle.net/21.11116/0000-0002-AB12-9 %R 10.1080/10586458.2017.1403981 %7 2017 %D 2017 %J Experimental Mathematics %I Taylor & Francis %C Boston, MA %@ false
[172]
A. Choudhary, “Approximation Algorithms for Vietoris-Rips and Čech Filtrations,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
Persistent Homology is a tool to analyze and visualize the shape of data from a topological viewpoint. It computes persistence, which summarizes the evolution of topological and geometric information about metric spaces over multiple scales of distances. While computing persistence is quite efficient for low-dimensional topological features, it becomes overwhelmingly expensive for medium to high-dimensional features. In this thesis, we attack this computational problem from several different angles. We present efficient techniques to approximate the persistence of metric spaces. Three of our methods are tailored towards general point clouds in Euclidean spaces. We make use of high dimensional lattice geometry to reduce the cost of the approximations. In particular, we discover several properties of the Permutahedral lattice, whose Voronoi cell is well-known for its combinatorial properties. The last method is suitable for point clouds with low intrinsic dimension, where we exploit the structural properties of the point set to tame the complexity. In some cases, we achieve a reduction in size complexity by trading off the quality of the approximation. Two of our methods work particularly well in conjunction with dimension-reduction techniques: we arrive at the first approximation schemes whose complexities are only polynomial in the size of the point cloud, and independent of the ambient dimension. On the other hand, we provide a lower bound result: we construct a point cloud that requires super-polynomial complexity for a high-quality approximation of the persistence. Together with our approximation schemes, we show that polynomial complexity is achievable for rough approximations, but impossible for sufficiently fine approximations. For some metric spaces, the intrinsic dimension is low in small neighborhoods of the input points, but much higher for large scales of distances. We develop a concept of local intrinsic dimension to capture this property. We also present several applications of this concept, including an approximation method for persistence. This thesis is written in English.
Export
BibTeX
@phdthesis{Choudharyphd2017, TITLE = {Approximation Algorithms for {V}ietoris-Rips and \v{C}ech Filtrations}, AUTHOR = {Choudhary, Aruni}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-269597}, DOI = {10.22028/D291-26959}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {Persistent Homology is a tool to analyze and visualize the shape of data from a topological viewpoint. It computes persistence, which summarizes the evolution of topological and geometric information about metric spaces over multiple scales of distances. While computing persistence is quite efficient for low-dimensional topological features, it becomes overwhelmingly expensive for medium to high-dimensional features. In this thesis, we attack this computational problem from several different angles. We present efficient techniques to approximate the persistence of metric spaces. Three of our methods are tailored towards general point clouds in Euclidean spaces. We make use of high dimensional lattice geometry to reduce the cost of the approximations. In particular, we discover several properties of the Permutahedral lattice, whose Voronoi cell is well-known for its combinatorial properties. The last method is suitable for point clouds with low intrinsic dimension, where we exploit the structural properties of the point set to tame the complexity. In some cases, we achieve a reduction in size complexity by trading off the quality of the approximation. Two of our methods work particularly well in conjunction with dimension-reduction techniques: we arrive at the first approximation schemes whose complexities are only polynomial in the size of the point cloud, and independent of the ambient dimension. On the other hand, we provide a lower bound result: we construct a point cloud that requires super-polynomial complexity for a high-quality approximation of the persistence. Together with our approximation schemes, we show that polynomial complexity is achievable for rough approximations, but impossible for sufficiently fine approximations. For some metric spaces, the intrinsic dimension is low in small neighborhoods of the input points, but much higher for large scales of distances. We develop a concept of local intrinsic dimension to capture this property. We also present several applications of this concept, including an approximation method for persistence. This thesis is written in English.}, }
Endnote
%0 Thesis %A Choudhary, Aruni %A referee: Kerber, Michael %Y Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Approximation Algorithms for Vietoris-Rips and Čech Filtrations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-8D63-5 %U urn:nbn:de:bsz:291-scidok-ds-269597 %R 10.22028/D291-26959 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P 123 p. %V phd %9 phd %X Persistent Homology is a tool to analyze and visualize the shape of data from a topological viewpoint. It computes persistence, which summarizes the evolution of topological and geometric information about metric spaces over multiple scales of distances. While computing persistence is quite efficient for low-dimensional topological features, it becomes overwhelmingly expensive for medium to high-dimensional features. In this thesis, we attack this computational problem from several different angles. We present efficient techniques to approximate the persistence of metric spaces. Three of our methods are tailored towards general point clouds in Euclidean spaces. We make use of high dimensional lattice geometry to reduce the cost of the approximations. In particular, we discover several properties of the Permutahedral lattice, whose Voronoi cell is well-known for its combinatorial properties. The last method is suitable for point clouds with low intrinsic dimension, where we exploit the structural properties of the point set to tame the complexity. In some cases, we achieve a reduction in size complexity by trading off the quality of the approximation. Two of our methods work particularly well in conjunction with dimension-reduction techniques: we arrive at the first approximation schemes whose complexities are only polynomial in the size of the point cloud, and independent of the ambient dimension. On the other hand, we provide a lower bound result: we construct a point cloud that requires super-polynomial complexity for a high-quality approximation of the persistence. Together with our approximation schemes, we show that polynomial complexity is achievable for rough approximations, but impossible for sufficiently fine approximations. For some metric spaces, the intrinsic dimension is low in small neighborhoods of the input points, but much higher for large scales of distances. We develop a concept of local intrinsic dimension to capture this property. We also present several applications of this concept, including an approximation method for persistence. This thesis is written in English. %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/26911
[173]
C. Croitoru, “Graph Models for Rational Social Interaction,” Universität des Saarlandes, Saarbrücken, 2017.
Export
BibTeX
@phdthesis{CroitoruPhd2017, TITLE = {Graph Models for Rational Social Interaction}, AUTHOR = {Croitoru, Cosmina}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-270576}, DOI = {10.22028/D291-27057}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, }
Endnote
%0 Thesis %A Croitoru, Cosmina %Y Mehlhorn, Kurt %A referee: Amgoud, Leila %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Graph Models for Rational Social Interaction : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-87DE-5 %R 10.22028/D291-27057 %U urn:nbn:de:bsz:291-scidok-ds-270576 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P X, 75 p. %V phd %9 phd %U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/26954
[174]
M. Cygan, M. Pilipczuk, M. Pilipczuk, E. J. van Leeuwen, and M. Wrochna, “Polynomial Kernelization for Removing Induced Claws and Diamonds,” Theory of Computing Systems, vol. 60, no. 4, 2017.
Export
BibTeX
@article{CyganAlgorithmica2016, TITLE = {Polynomial Kernelization for Removing Induced Claws and Diamonds}, AUTHOR = {Cygan, Marek and Pilipczuk, Marcin and Pilipczuk, Micha{\l} and van Leeuwen, Erik Jan and Wrochna, Marcin}, LANGUAGE = {eng}, ISSN = {1432-4350}, DOI = {10.1007/s00224-016-9689-x}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Theory of Computing Systems}, VOLUME = {60}, NUMBER = {4}, PAGES = {615--636}, }
Endnote
%0 Journal Article %A Cygan, Marek %A Pilipczuk, Marcin %A Pilipczuk, Michał %A van Leeuwen, Erik Jan %A Wrochna, Marcin %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Polynomial Kernelization for Removing Induced Claws and Diamonds : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-8606-6 %R 10.1007/s00224-016-9689-x %7 2016-06-23 %D 2017 %J Theory of Computing Systems %V 60 %N 4 %& 615 %P 615 - 636 %I Springer %C New York, NY %@ false
[175]
J. Diaz, O. Pottonen, M. Serna, and E. J. van Leeuwen, “Complexity of Metric Dimension on Planar Graphs,” Journal of Computer and System Sciences, vol. 83, no. 1, 2017.
Export
BibTeX
@article{Diaz2017, TITLE = {Complexity of Metric Dimension on Planar Graphs}, AUTHOR = {Diaz, Josep and Pottonen, Olli and Serna, Maria and van Leeuwen, Erik Jan}, ISSN = {0022-0000}, DOI = {10.1016/j.jcss.2016.06.006}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Journal of Computer and System Sciences}, VOLUME = {83}, NUMBER = {1}, PAGES = {132--158}, }
Endnote
%0 Journal Article %A Diaz, Josep %A Pottonen, Olli %A Serna, Maria %A van Leeuwen, Erik Jan %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Complexity of Metric Dimension on Planar Graphs : %U http://hdl.handle.net/11858/00-001M-0000-002B-A574-5 %R 10.1016/j.jcss.2016.06.006 %7 2016 %D 2017 %J Journal of Computer and System Sciences %V 83 %N 1 %& 132 %P 132 - 158 %I Elsevier %C Amsterdam %@ false
[176]
M. Dirnberger and K. Mehlhorn, “Characterizing Networks Formed by P. Polycephalum,” Journal of Physics D: Applied Physics, vol. 50, no. 22, 2017.
Export
BibTeX
@article{Dirnberg_Mehlhorn2017, TITLE = {Characterizing networks formed by \textsl{P. polycephalum}}, AUTHOR = {Dirnberger, Michael and Mehlhorn, Kurt}, LANGUAGE = {eng}, ISSN = {0022-3727}, DOI = {10.1088/1361-6463/aa6e7b}, PUBLISHER = {IOP Publishing}, ADDRESS = {Bristol}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Journal of Physics D: Applied Physics}, VOLUME = {50}, NUMBER = {22}, EID = {224002}, }
Endnote
%0 Journal Article %A Dirnberger, Michael %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Characterizing Networks Formed by P. Polycephalum : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-56FA-2 %R 10.1088/1361-6463/aa6e7b %7 2017 %D 2017 %J Journal of Physics D: Applied Physics %O J. Phys. D: Appl. Phys. %V 50 %N 22 %Z sequence number: 224002 %I IOP Publishing %C Bristol %@ false
[177]
M. Dirnberger, K. Mehlhorn, and T. Mehlhorn, “Introducing the Slime Mold Graph Repository,” Journal of Physics D: Applied Physics, vol. 50, no. 26, 2017.
Export
BibTeX
@article{Dirnberger2017, TITLE = {Introducing the Slime Mold Graph Repository}, AUTHOR = {Dirnberger, Michael and Mehlhorn, Kurt and Mehlhorn, Tim}, LANGUAGE = {eng}, ISSN = {0022-3727}, DOI = {10.1088/1361-6463/aa7326}, PUBLISHER = {IOP Publishing}, ADDRESS = {Bristol}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Journal of Physics D: Applied Physics}, VOLUME = {50}, NUMBER = {26}, EID = {264001}, }
Endnote
%0 Journal Article %A Dirnberger, Michael %A Mehlhorn, Kurt %A Mehlhorn, Tim %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Introducing the Slime Mold Graph Repository : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-8464-B %R 10.1088/1361-6463/aa7326 %7 2017 %D 2017 %J Journal of Physics D: Applied Physics %O J. Phys. D: Appl. Phys. %V 50 %N 26 %Z sequence number: 264001 %I IOP Publishing %C Bristol %@ false
[178]
M. Dirnberger, “Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum Polycephalum,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
This doctoral thesis aims towards distributed natural computing inspired by the slime mold Physarum polycephalum. The vein networks formed by this organism presumably support efficient transport of protoplasmic fluid. Devising models which capture the natural efficiency of the organism and form a suitable basis for the development of natural computing algorithms is an interesting and challenging goal. We start working towards this goal by designing and executing wet-lab experiments geared towards producing a large number of images of the vein networks of P. polycephalum. Next, we turn the depicted vein networks into graphs using our own custom software called Nefi. This enables a detailed numerical study, yielding a catalogue of characterizing observables spanning a wide array of different graph properties. To share our results and data, i.e. raw experimental data, graphs and analysis results, we introduce a dedicated repository revolving around slime mold data, the Smgr. The purpose of this repository is to promote data reuse and to foster a practice of increased data sharing. Finally we present a model based on interacting electronic circuits including current controlled voltage sources, which mimics the emergent flow patterns observed in live P. polycephalum. The model is simple, distributed and robust to changes in the underlying network topology. Thus it constitutes a promising basis for the development of distributed natural computing algorithms.
Export
BibTeX
@phdthesis{dirnbergerphd17, TITLE = {Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum Polycephalum}, AUTHOR = {Dirnberger, Michael}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69424}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {This doctoral thesis aims towards distributed natural computing inspired by the slime mold Physarum polycephalum. The vein networks formed by this organism presumably support efficient transport of protoplasmic fluid. Devising models which capture the natural efficiency of the organism and form a suitable basis for the development of natural computing algorithms is an interesting and challenging goal. We start working towards this goal by designing and executing wet-lab experiments geared towards producing a large number of images of the vein networks of P. polycephalum. Next, we turn the depicted vein networks into graphs using our own custom software called Nefi. This enables a detailed numerical study, yielding a catalogue of characterizing observables spanning a wide array of different graph properties. To share our results and data, i.e. raw experimental data, graphs and analysis results, we introduce a dedicated repository revolving around slime mold data, the Smgr. The purpose of this repository is to promote data reuse and to foster a practice of increased data sharing. Finally we present a model based on interacting electronic circuits including current controlled voltage sources, which mimics the emergent flow patterns observed in live P. polycephalum. The model is simple, distributed and robust to changes in the underlying network topology. Thus it constitutes a promising basis for the development of distributed natural computing algorithms.}, }
Endnote
%0 Thesis %A Dirnberger, Michael %Y Mehlhorn, Kurt %A referee: Grube, Martin %A referee: Döbereiner, Hans-Günther %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum Polycephalum : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-DE4F-0 %U urn:nbn:de:bsz:291-scidok-69424 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P XV, 193 p. %V phd %9 phd %X This doctoral thesis aims towards distributed natural computing inspired by the slime mold Physarum polycephalum. The vein networks formed by this organism presumably support efficient transport of protoplasmic fluid. Devising models which capture the natural efficiency of the organism and form a suitable basis for the development of natural computing algorithms is an interesting and challenging goal. We start working towards this goal by designing and executing wet-lab experiments geared towards producing a large number of images of the vein networks of P. polycephalum. Next, we turn the depicted vein networks into graphs using our own custom software called Nefi. This enables a detailed numerical study, yielding a catalogue of characterizing observables spanning a wide array of different graph properties. To share our results and data, i.e. raw experimental data, graphs and analysis results, we introduce a dedicated repository revolving around slime mold data, the Smgr. The purpose of this repository is to promote data reuse and to foster a practice of increased data sharing. Finally we present a model based on interacting electronic circuits including current controlled voltage sources, which mimics the emergent flow patterns observed in live P. polycephalum. The model is simple, distributed and robust to changes in the underlying network topology. Thus it constitutes a promising basis for the development of distributed natural computing algorithms. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6942/
[179]
L. Duraj, M. Künnemann, and A. Polak, “Tight Conditional Lower Bounds for Longest Common Increasing Subsequence,” 2017. [Online]. Available: http://arxiv.org/abs/1709.10075. (arXiv: 1709.10075)
Abstract
We consider the canonical generalization of the well-studied Longest Increasing Subsequence problem to multiple sequences, called $k$-LCIS: Given $k$ integer sequences $X_1,\dots,X_k$ of length at most $n$, the task is to determine the length of the longest common subsequence of $X_1,\dots,X_k$ that is also strictly increasing. Especially for the case of $k=2$ (called LCIS for short), several algorithms have been proposed that require quadratic time in the worst case. Assuming the Strong Exponential Time Hypothesis (SETH), we prove a tight lower bound, specifically, that no algorithm solves LCIS in (strongly) subquadratic time. Interestingly, the proof makes no use of normalization tricks common to hardness proofs for similar problems such as LCS. We further strengthen this lower bound (1) to rule out $O((nL)^{1-\varepsilon})$ time algorithms for LCIS, where $L$ denotes the solution size, (2) to rule out $O(n^{k-\varepsilon})$ time algorithms for $k$-LCIS, and (3) to follow already from weaker variants of SETH. We obtain the same conditional lower bounds for the related Longest Common Weakly Increasing Subsequence problem.
Export
BibTeX
@online{Duraj_arXiv1709.10075, TITLE = {Tight Conditional Lower Bounds for Longest Common Increasing Subsequence}, AUTHOR = {Duraj, Lech and K{\"u}nnemann, Marvin and Polak, Adam}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1709.10075}, EPRINT = {1709.10075}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We consider the canonical generalization of the well-studied Longest Increasing Subsequence problem to multiple sequences, called $k$-LCIS: Given $k$ integer sequences $X_1,\dots,X_k$ of length at most $n$, the task is to determine the length of the longest common subsequence of $X_1,\dots,X_k$ that is also strictly increasing. Especially for the case of $k=2$ (called LCIS for short), several algorithms have been proposed that require quadratic time in the worst case. Assuming the Strong Exponential Time Hypothesis (SETH), we prove a tight lower bound, specifically, that no algorithm solves LCIS in (strongly) subquadratic time. Interestingly, the proof makes no use of normalization tricks common to hardness proofs for similar problems such as LCS. We further strengthen this lower bound (1) to rule out $O((nL)^{1-\varepsilon})$ time algorithms for LCIS, where $L$ denotes the solution size, (2) to rule out $O(n^{k-\varepsilon})$ time algorithms for $k$-LCIS, and (3) to follow already from weaker variants of SETH. We obtain the same conditional lower bounds for the related Longest Common Weakly Increasing Subsequence problem.}, }
Endnote
%0 Report %A Duraj, Lech %A Künnemann, Marvin %A Polak, Adam %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Tight Conditional Lower Bounds for Longest Common Increasing Subsequence : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A8EB-8 %U http://arxiv.org/abs/1709.10075 %D 2017 %X We consider the canonical generalization of the well-studied Longest Increasing Subsequence problem to multiple sequences, called $k$-LCIS: Given $k$ integer sequences $X_1,\dots,X_k$ of length at most $n$, the task is to determine the length of the longest common subsequence of $X_1,\dots,X_k$ that is also strictly increasing. Especially for the case of $k=2$ (called LCIS for short), several algorithms have been proposed that require quadratic time in the worst case. Assuming the Strong Exponential Time Hypothesis (SETH), we prove a tight lower bound, specifically, that no algorithm solves LCIS in (strongly) subquadratic time. Interestingly, the proof makes no use of normalization tricks common to hardness proofs for similar problems such as LCS. We further strengthen this lower bound (1) to rule out $O((nL)^{1-\varepsilon})$ time algorithms for LCIS, where $L$ denotes the solution size, (2) to rule out $O(n^{k-\varepsilon})$ time algorithms for $k$-LCIS, and (3) to follow already from weaker variants of SETH. We obtain the same conditional lower bounds for the related Longest Common Weakly Increasing Subsequence problem. %K Computer Science, Computational Complexity, cs.CC
[180]
L. Duraj, M. Künnemann, and A. Polak, “Tight Conditional Lower Bounds for Longest Common Increasing Subsequence,” in 12th International Symposium on Parameterized and Exact Computation (IPEC 2017), Vienna, Austria, 2017.
Export
BibTeX
@inproceedings{Duraj_IPEC2017, TITLE = {Tight Conditional Lower Bounds for Longest Common Increasing Subsequence}, AUTHOR = {Duraj, Lech and K{\"u}nnemann, Marvin and Polak, Adam}, LANGUAGE = {eng}, ISBN = {978-3-95977-051-4}, URL = {urn:nbn:de:0030-drops-85706}, DOI = {10.4230/LIPIcs.IPEC.2017.15}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {12th International Symposium on Parameterized and Exact Computation (IPEC 2017)}, EDITOR = {Lokshtanov, Daniel and Nishimura, Naomi}, PAGES = {1--13}, EID = {15}, SERIES = {Leibniz International Proceedings in Informatics}, ADDRESS = {Vienna, Austria}, }
Endnote
%0 Conference Proceedings %A Duraj, Lech %A Künnemann, Marvin %A Polak, Adam %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Tight Conditional Lower Bounds for Longest Common Increasing Subsequence : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A8E9-A %R 10.4230/LIPIcs.IPEC.2017.15 %U urn:nbn:de:0030-drops-85706 %D 2017 %B 12th International Symposium on Parameterized and Exact Computation %Z date of event: 2017-09-06 - 2017-09-08 %C Vienna, Austria %B 12th International Symposium on Parameterized and Exact Computation %E Lokshtanov, Daniel; Nishimura, Naomi %P 1 - 13 %Z sequence number: 15 %I Schloss Dagstuhl %@ 978-3-95977-051-4 %B Leibniz International Proceedings in Informatics %U http://drops.dagstuhl.de/opus/volltexte/2018/8570/
[181]
K. Dutta, A. Ghosh, B. Jartoux, and N. H. Mustafa, “Shallow Packings, Semialgebraic Set Systems, Macbeath Regions and Polynomial Partitioning,” in 33rd International Symposium on Computational Geometry (SoCG 2017), Brisbane, Australia, 2017.
Export
BibTeX
@inproceedings{DuttaGJM-Mnets-17, TITLE = {Shallow Packings, Semialgebraic Set Systems, {Macbeath} Regions and Polynomial Partitioning}, AUTHOR = {Dutta, Kunal and Ghosh, Arijit and Jartoux, Bruno and Mustafa, Nabil H.}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-038-5}, URL = {urn:nbn:de:0030-drops-71991}, DOI = {10.4230/LIPIcs.SoCG.2017.38}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {33rd International Symposium on Computational Geometry (SoCG 2017)}, EDITOR = {Aronov, Boris and Katz, Matthew J.}, PAGES = {1--15}, EID = {38}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {77}, ADDRESS = {Brisbane, Australia}, }
Endnote
%0 Conference Proceedings %A Dutta, Kunal %A Ghosh, Arijit %A Jartoux, Bruno %A Mustafa, Nabil H. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Shallow Packings, Semialgebraic Set Systems, Macbeath Regions and Polynomial Partitioning : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-7941-7 %R 10.4230/LIPIcs.SoCG.2017.38 %U urn:nbn:de:0030-drops-71991 %D 2017 %B 33rd International Symposium on Computational Geometry %Z date of event: 2017-07-04 - 2017-07-07 %C Brisbane, Australia %B 33rd International Symposium on Computational Geometry %E Aronov, Boris; Katz, Matthew J. %P 1 - 15 %Z sequence number: 38 %I Schloss Dagstuhl %@ 978-3-95977-038-5 %B Leibniz International Proceedings in Informatics %N 77 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2017/7199/
[182]
P. Dütting and T. Kesselheim, “Best-Response Dynamics in Combinatorial Auctions with Item Bidding,” in Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), Barcelona, Spain, 2017.
Export
BibTeX
@inproceedings{doi:10.1137/1.9781611974782.33, TITLE = {Best-Response Dynamics in Combinatorial Auctions with Item Bidding}, AUTHOR = {D{\"u}tting, Paul and Kesselheim, Thomas}, LANGUAGE = {eng}, ISBN = {978-1-61197-478-2}, DOI = {10.1137/1.9781611974782.33}, PUBLISHER = {SIAM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, EDITOR = {Klein, Philip N.}, PAGES = {521--533}, ADDRESS = {Barcelona, Spain}, }
Endnote
%0 Conference Proceedings %A Dütting, Paul %A Kesselheim, Thomas %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Best-Response Dynamics in Combinatorial Auctions with Item Bidding : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4E5E-2 %R 10.1137/1.9781611974782.33 %D 2017 %B Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2017-01-16 - 2017-01-19 %C Barcelona, Spain %B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %E Klein, Philip N. %P 521 - 533 %I SIAM %@ 978-1-61197-478-2
[183]
M. Ernestus, S. Friedrichs, M. Hemmer, J. Kokemüller, A. Kröller, M. Moeini, and C. Schmidt, “Algorithms for Art Gallery Illumination,” Journal of Global Optimization, vol. 68, no. 1, 2017.
Export
BibTeX
@article{ErnestusJGO2016, TITLE = {Algorithms for Art Gallery Illumination}, AUTHOR = {Ernestus, Maximilian and Friedrichs, Stephan and Hemmer, Michael and Kokem{\"u}ller, Jan and Kr{\"o}ller, Alexander and Moeini, Mahdi and Schmidt, Christiane}, LANGUAGE = {eng}, ISSN = {0925-5001}, DOI = {10.1007/s10898-016-0452-2}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Journal of Global Optimization}, VOLUME = {68}, NUMBER = {1}, PAGES = {23--45}, }
Endnote
%0 Journal Article %A Ernestus, Maximilian %A Friedrichs, Stephan %A Hemmer, Michael %A Kokemüller, Jan %A Kröller, Alexander %A Moeini, Mahdi %A Schmidt, Christiane %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations %T Algorithms for Art Gallery Illumination : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-0B3C-1 %R 10.1007/s10898-016-0452-2 %7 2016 %D 2017 %J Journal of Global Optimization %V 68 %N 1 %& 23 %P 23 - 45 %I Springer %C New York, NY %@ false
[184]
G. Even and M. Medina, “Online Packet-Routing in Grids with Bounded Buffers,” Algorithmica, vol. 78, no. 3, 2017.
Export
BibTeX
@article{MedinaAlgorithmica2016, TITLE = {Online Packet-Routing in Grids with Bounded Buffers}, AUTHOR = {Even, Guy and Medina, Moti}, LANGUAGE = {eng}, ISSN = {0178-4617}, DOI = {10.1007/s00453-016-0177-0}, PUBLISHER = {Springer-Verlag}, ADDRESS = {New York}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Algorithmica}, VOLUME = {78}, NUMBER = {3}, PAGES = {819--868}, }
Endnote
%0 Journal Article %A Even, Guy %A Medina, Moti %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Online Packet-Routing in Grids with Bounded Buffers : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-85DD-E %R 10.1007/s00453-016-0177-0 %7 2016-07-11 %D 2017 %J Algorithmica %V 78 %N 3 %& 819 %P 819 - 868 %I Springer-Verlag %C New York %@ false
[185]
S. Friedrichs, “Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding,” Universität des Saarlandes, Saarbrücken, 2017.
Abstract
We study three problems. The first is the phenomenon of metastability in digital circuits. This is a state of bistable storage elements, such as registers, that is neither logical 0 nor 1 and breaks the abstraction of Boolean logic. We propose a time- and value-discrete model for metastability in digital circuits and show that it reflects relevant physical properties. Further, we propose the fundamentally new approach of using logical masking to perform meaningful computations despite the presence of metastable upsets and analyze what functions can be computed in our model. Additionally, we show that circuits with masking registers grow computationally more powerful with each available clock cycle. The second topic is parallel algorithms, based on an algebraic abstraction of the Moore-Bellman-Ford algorithm, for solving various distance problems. Our focus is distance approximations that obey the triangle inequality while at the same time achieving polylogarithmic depth and low work. Finally, we study the continuous Terrain Guarding Problem. We show that it has a rational discretization with a quadratic number of guard candidates, establish its membership in NP and the existence of a PTAS, and present an efficient implementation of a solver.
Export
BibTeX
@phdthesis{Friedrichsphd2017, TITLE = {Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding}, AUTHOR = {Friedrichs, Stephan}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-69660}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, ABSTRACT = {We study three problems. The first is the phenomenon of metastability in digital circuits. This is a state of bistable storage elements, such as registers, that is neither logical 0 nor 1 and breaks the abstraction of Boolean logic. We propose a time- and value-discrete model for metastability in digital circuits and show that it reflects relevant physical properties. Further, we propose the fundamentally new approach of using logical masking to perform meaningful computations despite the presence of metastable upsets and analyze what functions can be computed in our model. Additionally, we show that circuits with masking registers grow computationally more powerful with each available clock cycle. The second topic is parallel algorithms, based on an algebraic abstraction of the Moore-Bellman-Ford algorithm, for solving various distance problems. Our focus is distance approximations that obey the triangle inequality while at the same time achieving polylogarithmic depth and low work. Finally, we study the continuous Terrain Guarding Problem. We show that it has a rational discretization with a quadratic number of guard candidates, establish its membership in NP and the existence of a PTAS, and present an efficient implementation of a solver.}, }
Endnote
%0 Thesis %A Friedrichs, Stephan %Y Lenzen, Christoph %A referee: Mehlhorn, Kurt %A referee: Ghaffari, Mohsen %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-E9A7-B %U urn:nbn:de:bsz:291-scidok-69660 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P x, 226 p. %V phd %9 phd %X We study three problems. The first is the phenomenon of metastability in digital circuits. This is a state of bistable storage elements, such as registers, that is neither logical 0 nor 1 and breaks the abstraction of Boolean logic. We propose a time- and value-discrete model for metastability in digital circuits and show that it reflects relevant physical properties. Further, we propose the fundamentally new approach of using logical masking to perform meaningful computations despite the presence of metastable upsets and analyze what functions can be computed in our model. Additionally, we show that circuits with masking registers grow computationally more powerful with each available clock cycle. The second topic is parallel algorithms, based on an algebraic abstraction of the Moore-Bellman-Ford algorithm, for solving various distance problems. Our focus is on distance approximations that obey the triangle inequality while at the same time achieving polylogarithmic depth and low work. Finally, we study the continuous Terrain Guarding Problem. We show that it has a rational discretization with a quadratic number of guard candidates, establish its membership in NP and the existence of a PTAS, and present an efficient implementation of a solver. %U http://scidok.sulb.uni-saarland.de/volltexte/2017/6966/ %U http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de
[186]
M. Függer, C. Lenzen, and T. Polzer, “Metastability-Aware Memory-Efficient Time-to-Digital Converters,” in 23rd IEEE International Symposium on Asynchronous Circuits and Systems, San Diego, CA, USA, 2017.
Export
BibTeX
@inproceedings{fueggerASYNC2017, TITLE = {Metastability-Aware Memory-Efficient Time-to-Digital Converters}, AUTHOR = {F{\"u}gger, Matthias and Lenzen, Christoph and Polzer, Thomas}, LANGUAGE = {eng}, ISBN = {978-1-5386-2749-5}, DOI = {10.1109/ASYNC.2017.12}, PUBLISHER = {IEEE}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {23rd IEEE International Symposium on Asynchronous Circuits and Systems}, PAGES = {49--56}, ADDRESS = {San Diego, CA, USA}, }
Endnote
%0 Conference Proceedings %A Függer, Matthias %A Lenzen, Christoph %A Polzer, Thomas %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Metastability-Aware Memory-Efficient Time-to-Digital Converters : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-80CD-D %R 10.1109/ASYNC.2017.12 %D 2017 %B 23rd IEEE International Symposium on Asynchronous Circuits and Systems %Z date of event: 2017-05-21 - 2017-05-24 %C San Diego, CA, USA %B 23rd IEEE International Symposium on Asynchronous Circuits and Systems %P 49 - 56 %I IEEE %@ 978-1-5386-2749-5
[187]
W. Gálvez, F. Grandoni, S. Heydrich, S. Ingala, A. Khan, and A. Wiese, “Approximating Geometric Knapsack via L-Packings,” in 58th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2017), Berkeley, CA, USA, 2017.
Export
BibTeX
@inproceedings{Galvez_FOCS2017, TITLE = {Approximating Geometric Knapsack via {L}-Packings}, AUTHOR = {G{\'a}lvez, Waldo and Grandoni, Fabrizio and Heydrich, Sandy and Ingala, Salvatore and Khan, Arindam and Wiese, Andreas}, LANGUAGE = {eng}, ISBN = {978-1-5386-3464-6}, DOI = {10.1109/FOCS.2017.32}, PUBLISHER = {IEEE}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {58th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2017)}, PAGES = {260--271}, ADDRESS = {Berkeley, CA, USA}, }
Endnote
%0 Conference Proceedings %A Gálvez, Waldo %A Grandoni, Fabrizio %A Heydrich, Sandy %A Ingala, Salvatore %A Khan, Arindam %A Wiese, Andreas %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Approximating Geometric Knapsack via L-Packings : %G eng %U http://hdl.handle.net/21.11116/0000-0000-0469-6 %R 10.1109/FOCS.2017.32 %D 2017 %B 58th Annual IEEE Symposium on Foundations of Computer Science %Z date of event: 2017-10-15 - 2017-10-17 %C Berkeley, CA, USA %B 58th Annual IEEE Symposium on Foundations of Computer Science %P 260 - 271 %I IEEE %@ 978-1-5386-3464-6
[188]
J. Garg, M. Hoefer, and K. Mehlhorn, “Approximating the Nash Social Welfare with Budget-Additive Valuations,” 2017. [Online]. Available: http://arxiv.org/abs/1707.04428. (arXiv: 1707.04428)
Abstract
We present the first constant-factor approximation algorithm for maximizing the Nash social welfare when allocating indivisible items to agents with budget-additive valuation functions. Budget-additive valuations represent an important class of submodular functions. They have attracted a lot of research interest in recent years due to many interesting applications. For every $\varepsilon > 0$, our algorithm obtains a $(2.404 + \varepsilon)$-approximation in time polynomial in the input size and $1/\varepsilon$. Our algorithm relies on rounding an approximate equilibrium in a linear Fisher market where sellers have earning limits (upper bounds on the amount of money they want to earn) and buyers have utility limits (upper bounds on the amount of utility they want to achieve). In contrast to markets with either earning or utility limits, these markets have not been studied before. They turn out to have fundamentally different properties. Although the existence of equilibria is not guaranteed, we show that the market instances arising from the Nash social welfare problem always have an equilibrium. Further, we show that the set of equilibria is not convex, answering a question of [Cole et al, EC 2017]. We design an FPTAS to compute an approximate equilibrium, a result that may be of independent interest.
Export
BibTeX
@online{GargHoeferMehlhorn2017, TITLE = {Approximating the {Nash} Social Welfare with Budget-Additive Valuations}, AUTHOR = {Garg, Jugal and Hoefer, Martin and Mehlhorn, Kurt}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1707.04428}, EPRINT = {1707.04428}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We present the first constant-factor approximation algorithm for maximizing the Nash social welfare when allocating indivisible items to agents with budget-additive valuation functions. Budget-additive valuations represent an important class of submodular functions. They have attracted a lot of research interest in recent years due to many interesting applications. For every $\varepsilon > 0$, our algorithm obtains a $(2.404 + \varepsilon)$-approximation in time polynomial in the input size and $1/\varepsilon$. Our algorithm relies on rounding an approximate equilibrium in a linear Fisher market where sellers have earning limits (upper bounds on the amount of money they want to earn) and buyers have utility limits (upper bounds on the amount of utility they want to achieve). In contrast to markets with either earning or utility limits, these markets have not been studied before. They turn out to have fundamentally different properties. Although the existence of equilibria is not guaranteed, we show that the market instances arising from the Nash social welfare problem always have an equilibrium. Further, we show that the set of equilibria is not convex, answering a question of [Cole et al, EC 2017]. We design an FPTAS to compute an approximate equilibrium, a result that may be of independent interest.}, }
Endnote
%0 Report %A Garg, Jugal %A Hoefer, Martin %A Mehlhorn, Kurt %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Approximating the Nash Social Welfare with Budget-Additive Valuations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-E7D6-2 %U http://arxiv.org/abs/1707.04428 %D 2017 %X We present the first constant-factor approximation algorithm for maximizing the Nash social welfare when allocating indivisible items to agents with budget-additive valuation functions. Budget-additive valuations represent an important class of submodular functions. They have attracted a lot of research interest in recent years due to many interesting applications. For every $\varepsilon > 0$, our algorithm obtains a $(2.404 + \varepsilon)$-approximation in time polynomial in the input size and $1/\varepsilon$. Our algorithm relies on rounding an approximate equilibrium in a linear Fisher market where sellers have earning limits (upper bounds on the amount of money they want to earn) and buyers have utility limits (upper bounds on the amount of utility they want to achieve). In contrast to markets with either earning or utility limits, these markets have not been studied before. They turn out to have fundamentally different properties. Although the existence of equilibria is not guaranteed, we show that the market instances arising from the Nash social welfare problem always have an equilibrium. Further, we show that the set of equilibria is not convex, answering a question of [Cole et al, EC 2017]. We design an FPTAS to compute an approximate equilibrium, a result that may be of independent interest. %K Computer Science, Data Structures and Algorithms, cs.DS,Computer Science, Computer Science and Game Theory, cs.GT
[189]
F. Gesmundo, C. Ikenmeyer, and G. Panova, “Geometric Complexity Theory and Matrix Powering,” Differential Geometry and its Applications, vol. 55, 2017.
Export
BibTeX
@article{Gesmundo2017, TITLE = {Geometric Complexity Theory and Matrix Powering}, AUTHOR = {Gesmundo, Fulvio and Ikenmeyer, Christian and Panova, Greta}, LANGUAGE = {eng}, ISSN = {0926-2245}, DOI = {10.1016/j.difgeo.2017.07.001}, PUBLISHER = {North-Holland}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Differential Geometry and its Applications}, VOLUME = {55}, PAGES = {106--127}, }
Endnote
%0 Journal Article %A Gesmundo, Fulvio %A Ikenmeyer, Christian %A Panova, Greta %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Geometric Complexity Theory and Matrix Powering : %G eng %U http://hdl.handle.net/21.11116/0000-0000-0453-E %R 10.1016/j.difgeo.2017.07.001 %7 2017 %D 2017 %K Computer Science, Computational Complexity, cs.CC,Mathematics, Representation Theory, math.RT, %J Differential Geometry and its Applications %V 55 %& 106 %P 106 - 127 %I North-Holland %C Amsterdam %@ false
[190]
M. Goswami, X. Gu, V. P. Pingali, and G. Telang, “Computing Teichmüller Maps Between Polygons,” Foundations of Computational Mathematics, vol. 17, no. 2, 2017.
Export
BibTeX
@article{Goswami2017, TITLE = {Computing {T}eichm{\"u}ller Maps Between Polygons}, AUTHOR = {Goswami, Mayank and Gu, Xianfeng and Pingali, Vamsi P. and Telang, Gaurish}, LANGUAGE = {eng}, ISSN = {1615-3375}, DOI = {10.1007/s10208-015-9294-4}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Foundations of Computational Mathematics}, VOLUME = {17}, NUMBER = {2}, PAGES = {497--526}, }
Endnote
%0 Journal Article %A Goswami, Mayank %A Gu, Xianfeng %A Pingali, Vamsi P. %A Telang, Gaurish %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Computing Teichmüller Maps Between Polygons : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-3E48-3 %R 10.1007/s10208-015-9294-4 %7 2015-11-25 %D 2017 %J Foundations of Computational Mathematics %V 17 %N 2 %& 497 %P 497 - 526 %I Springer %C New York, NY %@ false
[191]
M. Goswami, R. Pagh, F. Silvestri, and J. Sivertsen, “Distance Sensitive Bloom Filters Without False Negatives,” in Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), Barcelona, Spain, 2017.
Export
BibTeX
@inproceedings{GoswamiSODA2017, TITLE = {Distance Sensitive Bloom Filters Without False Negatives}, AUTHOR = {Goswami, Mayank and Pagh, Rasmus and Silvestri, Francesco and Sivertsen, Johan}, LANGUAGE = {eng}, PUBLISHER = {SIAM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, PAGES = {257--269}, ADDRESS = {Barcelona, Spain}, }
Endnote
%0 Conference Proceedings %A Goswami, Mayank %A Pagh, Rasmus %A Silvestri, Francesco %A Sivertsen, Johan %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Distance Sensitive Bloom Filters Without False Negatives : %G eng %U http://hdl.handle.net/21.11116/0000-0001-4FA1-1 %D 2017 %B Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2017-01-16 - 2017-01-19 %C Barcelona, Spain %B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %P 257 - 269 %I SIAM
[192]
F. Grandoni, T. Mömke, A. Wiese, and H. Zhou, “To Augment or Not to Augment: Solving Unsplittable Flow on a Path by Creating Slack,” in Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), Barcelona, Spain, 2017.
Export
BibTeX
@inproceedings{doi:10.1137/1.9781611974782.159, TITLE = {To Augment or Not to Augment: {S}olving Unsplittable Flow on a Path by Creating Slack}, AUTHOR = {Grandoni, Fabrizio and M{\"o}mke, Tobias and Wiese, Andreas and Zhou, Hang}, LANGUAGE = {eng}, ISBN = {978-1-61197-478-2}, DOI = {10.1137/1.9781611974782.159}, PUBLISHER = {SIAM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, EDITOR = {Klein, Philip N.}, PAGES = {2411--2422}, ADDRESS = {Barcelona, Spain}, }
Endnote
%0 Conference Proceedings %A Grandoni, Fabrizio %A Mömke, Tobias %A Wiese, Andreas %A Zhou, Hang %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T To Augment or Not to Augment: Solving Unsplittable Flow on a Path by Creating Slack : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4EB5-B %R 10.1137/1.9781611974782.159 %D 2017 %B Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2017-01-16 - 2017-01-19 %C Barcelona, Spain %B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %E Klein, Philip N. %P 2411 - 2422 %I SIAM %@ 978-1-61197-478-2
[193]
S. Heydrich and A. Wiese, “Faster Approximation Schemes for the Two-dimensional Knapsack Problem,” in Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), Barcelona, Spain, 2017.
Export
BibTeX
@inproceedings{HeydrichW17, TITLE = {Faster Approximation Schemes for the Two-dimensional Knapsack Problem}, AUTHOR = {Heydrich, Sandy and Wiese, Andreas}, LANGUAGE = {eng}, ISBN = {978-1-61197-478-2}, DOI = {10.1137/1.9781611974782.6}, PUBLISHER = {SIAM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, EDITOR = {Klein, Philip N.}, PAGES = {79--98}, ADDRESS = {Barcelona, Spain}, }
Endnote
%0 Conference Proceedings %A Heydrich, Sandy %A Wiese, Andreas %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Faster Approximation Schemes for the Two-dimensional Knapsack Problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-54AD-3 %R 10.1137/1.9781611974782.6 %D 2017 %B Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %Z date of event: 2017-01-16 - 2017-01-19 %C Barcelona, Spain %B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms %E Klein, Philip N. %P 79 - 98 %I SIAM %@ 978-1-61197-478-2
[194]
M. Hoefer and B. Kodric, “Combinatorial Secretary Problems with Ordinal Information,” 2017. [Online]. Available: http://arxiv.org/abs/1702.01290. (arXiv: 1702.01290)
Abstract
The secretary problem is a classic model for online decision making. Recently, combinatorial extensions such as matroid or matching secretary problems have become an important tool to study algorithmic problems in dynamic markets. Here the decision maker must know the numerical value of each arriving element, which can be a demanding informational assumption. In this paper, we initiate the study of combinatorial secretary problems with ordinal information, in which the decision maker only needs to be aware of a preference order consistent with the values of arrived elements. The goal is to design online algorithms with small competitive ratios. For a variety of combinatorial problems, such as bipartite matching, general packing LPs, and independent set with bounded local independence number, we design new algorithms that obtain constant competitive ratios. For the matroid secretary problem, we observe that many existing algorithms for special matroid structures maintain their competitive ratios even in the ordinal model. In these cases, the restriction to ordinal information does not represent any additional obstacle. Moreover, we show that ordinal variants of the submodular matroid secretary problems can be solved using algorithms for the linear versions by extending [Feldman and Zenklusen, 2015]. In contrast, we provide a lower bound of $\Omega(\sqrt{n}/(\log n))$ for algorithms that are oblivious to the matroid structure, where $n$ is the total number of elements. This contrasts an upper bound of $O(\log n)$ in the cardinal model, and it shows that the technique of thresholding is not sufficient for good algorithms in the ordinal model.
Export
BibTeX
@online{Hoefer_Kodric2017, TITLE = {Combinatorial Secretary Problems with Ordinal Information}, AUTHOR = {Hoefer, Martin and Kodric, Bojana}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1702.01290}, EPRINT = {1702.01290}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {The secretary problem is a classic model for online decision making. Recently, combinatorial extensions such as matroid or matching secretary problems have become an important tool to study algorithmic problems in dynamic markets. Here the decision maker must know the numerical value of each arriving element, which can be a demanding informational assumption. In this paper, we initiate the study of combinatorial secretary problems with ordinal information, in which the decision maker only needs to be aware of a preference order consistent with the values of arrived elements. The goal is to design online algorithms with small competitive ratios. For a variety of combinatorial problems, such as bipartite matching, general packing LPs, and independent set with bounded local independence number, we design new algorithms that obtain constant competitive ratios. For the matroid secretary problem, we observe that many existing algorithms for special matroid structures maintain their competitive ratios even in the ordinal model. In these cases, the restriction to ordinal information does not represent any additional obstacle. Moreover, we show that ordinal variants of the submodular matroid secretary problems can be solved using algorithms for the linear versions by extending [Feldman and Zenklusen, 2015]. In contrast, we provide a lower bound of $\Omega(\sqrt{n}/(\log n))$ for algorithms that are oblivious to the matroid structure, where $n$ is the total number of elements. This contrasts an upper bound of $O(\log n)$ in the cardinal model, and it shows that the technique of thresholding is not sufficient for good algorithms in the ordinal model.}, }
Endnote
%0 Report %A Hoefer, Martin %A Kodric, Bojana %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Combinatorial Secretary Problems with Ordinal Information : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5C63-3 %U http://arxiv.org/abs/1702.01290 %D 2017 %X The secretary problem is a classic model for online decision making. Recently, combinatorial extensions such as matroid or matching secretary problems have become an important tool to study algorithmic problems in dynamic markets. Here the decision maker must know the numerical value of each arriving element, which can be a demanding informational assumption. In this paper, we initiate the study of combinatorial secretary problems with ordinal information, in which the decision maker only needs to be aware of a preference order consistent with the values of arrived elements. The goal is to design online algorithms with small competitive ratios. For a variety of combinatorial problems, such as bipartite matching, general packing LPs, and independent set with bounded local independence number, we design new algorithms that obtain constant competitive ratios. For the matroid secretary problem, we observe that many existing algorithms for special matroid structures maintain their competitive ratios even in the ordinal model. In these cases, the restriction to ordinal information does not represent any additional obstacle. Moreover, we show that ordinal variants of the submodular matroid secretary problems can be solved using algorithms for the linear versions by extending [Feldman and Zenklusen, 2015]. In contrast, we provide a lower bound of $\Omega(\sqrt{n}/(\log n))$ for algorithms that are oblivious to the matroid structure, where $n$ is the total number of elements. This contrasts an upper bound of $O(\log n)$ in the cardinal model, and it shows that the technique of thresholding is not sufficient for good algorithms in the ordinal model. %K Computer Science, Data Structures and Algorithms, cs.DS
[195]
M. Hoefer and L. Wagner, “Locally Stable Marriage with Strict Preferences,” SIAM Journal on Discrete Mathematics, vol. 31, no. 1, 2017.
Export
BibTeX
@article{HoeferWagner2017, TITLE = {Locally Stable Marriage with Strict Preferences}, AUTHOR = {Hoefer, Martin and Wagner, Lisa}, LANGUAGE = {eng}, ISSN = {0895-4801}, DOI = {10.1137/151003854}, PUBLISHER = {SIAM}, ADDRESS = {Philadelphia, Pa.}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {SIAM Journal on Discrete Mathematics}, VOLUME = {31}, NUMBER = {1}, PAGES = {283--316}, }
Endnote
%0 Journal Article %A Hoefer, Martin %A Wagner, Lisa %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Locally Stable Marriage with Strict Preferences : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-26B4-1 %R 10.1137/151003854 %7 2017-02-23 %D 2017 %J SIAM Journal on Discrete Mathematics %V 31 %N 1 %& 283 %P 283 - 316 %I SIAM %C Philadelphia, Pa. %@ false
[196]
M. Hoefer and B. Kodric, “Combinatorial Secretary Problems with Ordinal Information,” in 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017), Warsaw, Poland, 2017.
Export
BibTeX
@inproceedings{Hoefer_ICALP2017, TITLE = {Combinatorial Secretary Problems with Ordinal Information}, AUTHOR = {Hoefer, Martin and Kodric, Bojana}, LANGUAGE = {eng}, ISBN = {978-3-95977-041-5}, URL = {urn:nbn:de:0030-drops-74594}, DOI = {10.4230/LIPIcs.ICALP.2017.133}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)}, EDITOR = {Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca}, PAGES = {1--14}, EID = {133}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {80}, ADDRESS = {Warsaw, Poland}, }
Endnote
%0 Conference Proceedings %A Hoefer, Martin %A Kodric, Bojana %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Combinatorial Secretary Problems with Ordinal Information : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A320-1 %R 10.4230/LIPIcs.ICALP.2017.133 %U urn:nbn:de:0030-drops-74594 %D 2017 %B 44th International Colloquium on Automata, Languages, and Programming %Z date of event: 2017-07-10 - 2017-07-14 %C Warsaw, Poland %B 44th International Colloquium on Automata, Languages, and Programming %E Chatzigiannakis, Ioannis; Indyk, Piotr; Kuhn, Fabian; Muscholl, Anca %P 1 - 14 %Z sequence number: 133 %I Schloss Dagstuhl %@ 978-3-95977-041-5 %B Leibniz International Proceedings in Informatics %N 80 %U http://drops.dagstuhl.de/opus/volltexte/2017/7459/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[197]
C. Ikenmeyer, B. Komarath, C. Lenzen, V. Lysikov, A. Mokhov, and K. Sreenivasaiah, “On the Complexity of Hazard-free Circuits,” 2017. [Online]. Available: http://arxiv.org/abs/1711.01904. (arXiv: 1711.01904)
Abstract
The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems.
Export
BibTeX
@online{Ikenmeyer_Komarath2017, TITLE = {On the Complexity of Hazard-free Circuits}, AUTHOR = {Ikenmeyer, Christian and Komarath, Balagopal and Lenzen, Christoph and Lysikov, Vladimir and Mokhov, Andrey and Sreenivasaiah, Karteek}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1711.01904}, EPRINT = {1711.01904}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems.}, }
Endnote
%0 Report %A Ikenmeyer, Christian %A Komarath, Balagopal %A Lenzen, Christoph %A Lysikov, Vladimir %A Mokhov, Andrey %A Sreenivasaiah, Karteek %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T On the Complexity of Hazard-free Circuits : %G eng %U http://hdl.handle.net/21.11116/0000-0000-3F22-4 %U http://arxiv.org/abs/1711.01904 %D 2017 %X The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems. %K Computer Science, Computational Complexity, cs.CC,
[198]
C. Ikenmeyer and J. M. Landsberg, “On the Complexity of the Permanent in Various Computational Models,” Journal of Pure and Applied Algebra, vol. 221, no. 12, 2017.
Export
BibTeX
@article{IL:17, TITLE = {On the Complexity of the Permanent in Various Computational Models}, AUTHOR = {Ikenmeyer, Christian and Landsberg, J. M.}, LANGUAGE = {eng}, ISSN = {0022-4049}, DOI = {10.1016/j.jpaa.2017.02.008}, PUBLISHER = {North-Holland}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Journal of Pure and Applied Algebra}, VOLUME = {221}, NUMBER = {12}, PAGES = {2911--2927}, }
Endnote
%0 Journal Article %A Ikenmeyer, Christian %A Landsberg, J. M. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On the Complexity of the Permanent in Various Computational Models : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4F23-D %R 10.1016/j.jpaa.2017.02.008 %7 2017-02-23 %D 2017 %J Journal of Pure and Applied Algebra %O J. Pure Appl. Algebra %V 221 %N 12 %& 2911 %P 2911 - 2927 %I North-Holland %C Amsterdam %@ false
[199]
C. Ikenmeyer, K. D. Mulmuley, and M. Walter, “On Vanishing of Kronecker Coefficients,” Computational Complexity, vol. 26, no. 4, 2017.
Export
BibTeX
@article{Ikenmeyer2017a, TITLE = {On vanishing of {Kronecker} coefficients}, AUTHOR = {Ikenmeyer, Christian and Mulmuley, Ketan D. and Walter, Michael}, LANGUAGE = {eng}, DOI = {10.1007/s00037-017-0158-y}, PUBLISHER = {Springer}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Computational Complexity}, VOLUME = {26}, NUMBER = {4}, PAGES = {949--992}, }
Endnote
%0 Journal Article %A Ikenmeyer, Christian %A Mulmuley, Ketan D. %A Walter, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T On Vanishing of Kronecker Coefficients : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-6278-F %R 10.1007/s00037-017-0158-y %7 2017 %D 2017 %J Computational Complexity %V 26 %N 4 %& 949 %P 949 - 992 %I Springer %C New York, NY
[200]
C. Ikenmeyer and G. Panova, “Rectangular Kronecker Coefficients and Plethysms in Geometric Complexity Theory,” Advances in Mathematics, vol. 319, 2017.
Export
BibTeX
@article{Ikenmeyer2017, TITLE = {Rectangular {Kronecker} Coefficients and Plethysms in Geometric Complexity Theory}, AUTHOR = {Ikenmeyer, Christian and Panova, Greta}, LANGUAGE = {eng}, ISSN = {0001-8708}, DOI = {10.1016/j.aim.2017.08.024}, PUBLISHER = {Academic Press}, ADDRESS = {Orlando, Fla.}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Advances in Mathematics}, VOLUME = {319}, PAGES = {40--66}, }
Endnote
%0 Journal Article %A Ikenmeyer, Christian %A Panova, Greta %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Rectangular Kronecker Coefficients and Plethysms in Geometric Complexity Theory : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002E-0E59-D %R 10.1016/j.aim.2017.08.024 %7 2017 %D 2017 %J Advances in Mathematics %V 319 %& 40 %P 40 - 66 %I Academic Press %C Orlando, Fla. %@ false
[201]
C. Ikenmeyer and V. Lysikov, “Strassen’s 2x2 Matrix Multiplication Algorithm: A Conceptual Perspective,” 2017. [Online]. Available: http://arxiv.org/abs/1708.08083. (arXiv: 1708.08083)
Abstract
Despite its importance, all proofs of the correctness of Strassen's famous 1969 algorithm to multiply two 2x2 matrices with only seven multiplications involve some more or less tedious calculations such as explicitly multiplying specific 2x2 matrices, expanding expressions to cancel terms with opposing signs, or expanding tensors over the standard basis. This is why the proof is nontrivial to memorize and why many presentations of the proof avoid showing all the details and leave a significant amount of verifications to the reader. In this note we give a short, self-contained, easy to memorize, and elegant proof of the existence of Strassen's algorithm that avoids these types of calculations. We achieve this by focusing on symmetries and algebraic properties. Our proof combines the classical theory of M-pairs, which was initiated by B\"uchi and Clausen in 1985, with recent work on the geometry of Strassen's algorithm by Chiantini, Ikenmeyer, Landsberg, and Ottaviani from 2016.
Export
BibTeX
@online{Ikenmeyer_Lysikov2017, TITLE = {Strassen's 2x2 Matrix Multiplication Algorithm: A Conceptual Perspective}, AUTHOR = {Ikenmeyer, Christian and Lysikov, Vladimir}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1708.08083}, EPRINT = {1708.08083}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Despite its importance, all proofs of the correctness of Strassen's famous 1969 algorithm to multiply two 2x2 matrices with only seven multiplications involve some more or less tedious calculations such as explicitly multiplying specific 2x2 matrices, expanding expressions to cancel terms with opposing signs, or expanding tensors over the standard basis. This is why the proof is nontrivial to memorize and why many presentations of the proof avoid showing all the details and leave a significant amount of verifications to the reader. In this note we give a short, self-contained, easy to memorize, and elegant proof of the existence of Strassen's algorithm that avoids these types of calculations. We achieve this by focusing on symmetries and algebraic properties. Our proof combines the classical theory of M-pairs, which was initiated by B\"uchi and Clausen in 1985, with recent work on the geometry of Strassen's algorithm by Chiantini, Ikenmeyer, Landsberg, and Ottaviani from 2016.}, }
Endnote
%0 Report %A Ikenmeyer, Christian %A Lysikov, Vladimir %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Strassen's 2x2 Matrix Multiplication Algorithm: A Conceptual Perspective : %G eng %U http://hdl.handle.net/21.11116/0000-0000-3F1F-9 %U http://arxiv.org/abs/1708.08083 %D 2017 %X Despite its importance, all proofs of the correctness of Strassen's famous 1969 algorithm to multiply two 2x2 matrices with only seven multiplications involve some more or less tedious calculations such as explicitly multiplying specific 2x2 matrices, expanding expressions to cancel terms with opposing signs, or expanding tensors over the standard basis. This is why the proof is nontrivial to memorize and why many presentations of the proof avoid showing all the details and leave a significant amount of verifications to the reader. In this note we give a short, self-contained, easy to memorize, and elegant proof of the existence of Strassen's algorithm that avoids these types of calculations. We achieve this by focusing on symmetries and algebraic properties. Our proof combines the classical theory of M-pairs, which was initiated by B\"uchi and Clausen in 1985, with recent work on the geometry of Strassen's algorithm by Chiantini, Ikenmeyer, Landsberg, and Ottaviani from 2016. %K Computer Science, Data Structures and Algorithms, cs.DS,Computer Science, Symbolic Computation, cs.SC
[202]
G. Jindal, P. Kolev, R. Peng, and S. Sawlani, “Density Independent Algorithms for Sparsifying k-Step Random Walks,” 2017. [Online]. Available: http://arxiv.org/abs/1702.06110. (arXiv: 1702.06110)
Abstract
We give faster algorithms for producing sparse approximations of the transition matrices of $k$-step random walks on undirected, weighted graphs. These transition matrices also form graphs, and arise as intermediate objects in a variety of graph algorithms. Our improvements are based on a better understanding of processes that sample such walks, as well as tighter bounds on key weights underlying these sampling processes. On a graph with $n$ vertices and $m$ edges, our algorithm produces a graph with about $n\log{n}$ edges that approximates the $k$-step random walk graph in about $m + n \log^4{n}$ time. In order to obtain this runtime bound, we also revisit "density independent" algorithms for sparsifying graphs whose runtime overhead is expressed only in terms of the number of vertices.
Export
BibTeX
@online{DBLP:journals/corr/JindalKPS17, TITLE = {Density Independent Algorithms for Sparsifying $k$-Step Random Walks}, AUTHOR = {Jindal, Gorav and Kolev, Pavel and Peng, Richard and Sawlani, Saurabh}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1702.06110}, EPRINT = {1702.06110}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We give faster algorithms for producing sparse approximations of the transition matrices of $k$-step random walks on undirected, weighted graphs. These transition matrices also form graphs, and arise as intermediate objects in a variety of graph algorithms. Our improvements are based on a better understanding of processes that sample such walks, as well as tighter bounds on key weights underlying these sampling processes. On a graph with $n$ vertices and $m$ edges, our algorithm produces a graph with about $n\log{n}$ edges that approximates the $k$-step random walk graph in about $m + n \log^4{n}$ time. In order to obtain this runtime bound, we also revisit "density independent" algorithms for sparsifying graphs whose runtime overhead is expressed only in terms of the number of vertices.}, }
Endnote
%0 Report %A Jindal, Gorav %A Kolev, Pavel %A Peng, Richard %A Sawlani, Saurabh %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Density Independent Algorithms for Sparsifying k-Step Random Walks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-26A6-1 %U http://arxiv.org/abs/1702.06110 %D 2017 %X We give faster algorithms for producing sparse approximations of the transition matrices of $k$-step random walks on undirected, weighted graphs. These transition matrices also form graphs, and arise as intermediate objects in a variety of graph algorithms. Our improvements are based on a better understanding of processes that sample such walks, as well as tighter bounds on key weights underlying these sampling processes. On a graph with $n$ vertices and $m$ edges, our algorithm produces a graph with about $n\log{n}$ edges that approximates the $k$-step random walk graph in about $m + n \log^4{n}$ time. In order to obtain this runtime bound, we also revisit "density independent" algorithms for sparsifying graphs whose runtime overhead is expressed only in terms of the number of vertices. %K Computer Science, Data Structures and Algorithms, cs.DS
[203]
G. Jindal and M. Sagraloff, “Efficiently Computing Real Roots of Sparse Polynomials,” in ISSAC’17, International Symposium on Symbolic and Algebraic Computation, Kaiserslautern, Germany, 2017.
Export
BibTeX
@inproceedings{JindalISSAC2017, TITLE = {Efficiently Computing Real Roots of Sparse Polynomials}, AUTHOR = {Jindal, Gorav and Sagraloff, Michael}, LANGUAGE = {eng}, ISBN = {978-1-4503-5064-8}, DOI = {10.1145/3087604.3087652}, PUBLISHER = {ACM}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {ISSAC{\textquoteright}17, International Symposium on Symbolic and Algebraic Computation}, PAGES = {229--236}, ADDRESS = {Kaiserslautern, Germany}, }
Endnote
%0 Conference Proceedings %A Jindal, Gorav %A Sagraloff, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Efficiently Computing Real Roots of Sparse Polynomials : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-FBA6-4 %R 10.1145/3087604.3087652 %D 2017 %B International Symposium on Symbolic and Algebraic Computation %Z date of event: 2017-07-25 - 2017-07-28 %C Kaiserslautern, Germany %B ISSAC’17 %P 229 - 236 %I ACM %@ 978-1-4503-5064-8
[204]
G. Jindal and M. Sagraloff, “Efficiently Computing Real Roots of Sparse Polynomials,” 2017. [Online]. Available: http://arxiv.org/abs/1704.06979. (arXiv: 1704.06979)
Abstract
We propose an efficient algorithm to compute the real roots of a sparse polynomial $f\in\mathbb{R}[x]$ having $k$ non-zero real-valued coefficients. It is assumed that arbitrarily good approximations of the non-zero coefficients are given by means of a coefficient oracle. For a given positive integer $L$, our algorithm returns disjoint disks $\Delta_{1},\ldots,\Delta_{s}\subset\mathbb{C}$, with $s<2k$, centered at the real axis and of radius less than $2^{-L}$ together with positive integers $\mu_{1},\ldots,\mu_{s}$ such that each disk $\Delta_{i}$ contains exactly $\mu_{i}$ roots of $f$ counted with multiplicity. In addition, it is ensured that each real root of $f$ is contained in one of the disks. If $f$ has only simple real roots, our algorithm can also be used to isolate all real roots. The bit complexity of our algorithm is polynomial in $k$ and $\log n$, and near-linear in $L$ and $\tau$, where $2^{-\tau}$ and $2^{\tau}$ constitute lower and upper bounds on the absolute values of the non-zero coefficients of $f$, and $n$ is the degree of $f$. For root isolation, the bit complexity is polynomial in $k$ and $\log n$, and near-linear in $\tau$ and $\log\sigma^{-1}$, where $\sigma$ denotes the separation of the real roots.
Export
BibTeX
@online{DBLP:journals/corr/JindalS17, TITLE = {Efficiently Computing Real Roots of Sparse Polynomials}, AUTHOR = {Jindal, Gorav and Sagraloff, Michael}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1704.06979}, EPRINT = {1704.06979}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We propose an efficient algorithm to compute the real roots of a sparse polynomial $f\in\mathbb{R}[x]$ having $k$ non-zero real-valued coefficients. It is assumed that arbitrarily good approximations of the non-zero coefficients are given by means of a coefficient oracle. For a given positive integer $L$, our algorithm returns disjoint disks $\Delta_{1},\ldots,\Delta_{s}\subset\mathbb{C}$, with $s<2k$, centered at the real axis and of radius less than $2^{-L}$ together with positive integers $\mu_{1},\ldots,\mu_{s}$ such that each disk $\Delta_{i}$ contains exactly $\mu_{i}$ roots of $f$ counted with multiplicity. In addition, it is ensured that each real root of $f$ is contained in one of the disks. If $f$ has only simple real roots, our algorithm can also be used to isolate all real roots. The bit complexity of our algorithm is polynomial in $k$ and $\log n$, and near-linear in $L$ and $\tau$, where $2^{-\tau}$ and $2^{\tau}$ constitute lower and upper bounds on the absolute values of the non-zero coefficients of $f$, and $n$ is the degree of $f$. For root isolation, the bit complexity is polynomial in $k$ and $\log n$, and near-linear in $\tau$ and $\log\sigma^{-1}$, where $\sigma$ denotes the separation of the real roots.}, }
Endnote
%0 Report %A Jindal, Gorav %A Sagraloff, Michael %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Efficiently Computing Real Roots of Sparse Polynomials : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-8AD1-7 %U http://arxiv.org/abs/1704.06979 %D 2017 %X We propose an efficient algorithm to compute the real roots of a sparse polynomial $f\in\mathbb{R}[x]$ having $k$ non-zero real-valued coefficients. It is assumed that arbitrarily good approximations of the non-zero coefficients are given by means of a coefficient oracle. For a given positive integer $L$, our algorithm returns disjoint disks $\Delta_{1},\ldots,\Delta_{s}\subset\mathbb{C}$, with $s<2k$, centered at the real axis and of radius less than $2^{-L}$ together with positive integers $\mu_{1},\ldots,\mu_{s}$ such that each disk $\Delta_{i}$ contains exactly $\mu_{i}$ roots of $f$ counted with multiplicity. In addition, it is ensured that each real root of $f$ is contained in one of the disks. If $f$ has only simple real roots, our algorithm can also be used to isolate all real roots. The bit complexity of our algorithm is polynomial in $k$ and $\log n$, and near-linear in $L$ and $\tau$, where $2^{-\tau}$ and $2^{\tau}$ constitute lower and upper bounds on the absolute values of the non-zero coefficients of $f$, and $n$ is the degree of $f$. For root isolation, the bit complexity is polynomial in $k$ and $\log n$, and near-linear in $\tau$ and $\log\sigma^{-1}$, where $\sigma$ denotes the separation of the real roots. %K Computer Science, Symbolic Computation, cs.SC
[205]
G. Jindal, P. Kolev, R. Peng, and S. Sawlani, “Density Independent Algorithms for Sparsifying k-Step Random Walks,” in Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017), Berkeley, CA, USA, 2017.
Export
BibTeX
@inproceedings{Jindal_APPROXRANDOM17, TITLE = {Density Independent Algorithms for Sparsifying $k$-Step Random Walks}, AUTHOR = {Jindal, Gorav and Kolev, Pavel and Peng, Richard and Sawlani, Saurabh}, LANGUAGE = {eng}, ISBN = {978-3-95977-044-6}, URL = {urn:nbn:de:0030-drops-75638}, DOI = {10.4230/LIPIcs.APPROX-RANDOM.2017.14}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)}, EDITOR = {Jansen, Klaus and Rolim, Jos{\'e} D. P. and Williamson, David P. and Vempala, Santosh S.}, PAGES = {1--17}, EID = {14}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {81}, ADDRESS = {Berkeley, CA, USA}, }
Endnote
%0 Conference Proceedings %A Jindal, Gorav %A Kolev, Pavel %A Peng, Richard %A Sawlani, Saurabh %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Density Independent Algorithms for Sparsifying k-Step Random Walks : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A92E-D %R 10.4230/LIPIcs.APPROX-RANDOM.2017.14 %U urn:nbn:de:0030-drops-75638 %D 2017 %B 20th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems / 21st International Workshop on Randomization and Computation %Z date of event: 2017-08-16 - 2017-08-18 %C Berkeley, CA, USA %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %E Jansen, Klaus; Rolim, José D. P.; Williamson, David P.; Vempala, Santosh S. %P 1 - 17 %Z sequence number: 14 %I Schloss Dagstuhl %@ 978-3-95977-044-6 %B Leibniz International Proceedings in Informatics %N 81 %U http://drops.dagstuhl.de/opus/volltexte/2017/7563/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[206]
A. Karrenbauer, R. Becker, C. Scholl, and B. Becker, “From DQBF to QBF by Dependency Elimination,” in Theory and Applications of Satisfiability Testing -- SAT 2017, Melbourne, Australia, 2017.
Export
BibTeX
@inproceedings{KarrenbauerSAT2017, TITLE = {From {DQBF} to {QBF} by Dependency Elimination}, AUTHOR = {Karrenbauer, Andreas and Becker, Ruben and Scholl, Christoph and Becker, Bernd}, LANGUAGE = {eng}, ISBN = {978-3-319-66262-6}, DOI = {10.1007/978-3-319-66263-3_21}, PUBLISHER = {Springer}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Theory and Applications of Satisfiability Testing -- SAT 2017}, EDITOR = {Gaspers, Serge and Walsh, Toby}, PAGES = {326--343}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {10491}, ADDRESS = {Melbourne, Australia}, }
Endnote
%0 Conference Proceedings %A Karrenbauer, Andreas %A Becker, Ruben %A Scholl, Christoph %A Becker, Bernd %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T From DQBF to QBF by Dependency Elimination : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-FBAC-7 %R 10.1007/978-3-319-66263-3_21 %D 2017 %B 20th International Conference on Theory and Applications of Satisfiability Testing %Z date of event: 2017-08-28 - 2017-09-01 %C Melbourne, Australia %B Theory and Applications of Satisfiability Testing -- SAT 2017 %E Gaspers, Serge; Walsh, Toby %P 326 - 343 %I Springer %@ 978-3-319-66262-6 %B Lecture Notes in Computer Science %N 10491
[207]
T. Kesselheim and B. Kodric, “Price of Anarchy for Mechanisms with Risk-Averse Agents,” in 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017), Warsaw, Poland, 2017.
Export
BibTeX
@inproceedings{Kesselheim_ICALP2017, TITLE = {Price of Anarchy for Mechanisms with Risk-Averse Agents}, AUTHOR = {Kesselheim, Thomas and Kodric, Bojana}, LANGUAGE = {eng}, ISBN = {978-3-95977-041-5}, URL = {urn:nbn:de:0030-drops-91599}, DOI = {10.4230/LIPIcs.ICALP.2018.155}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)}, EDITOR = {Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca}, PAGES = {1--14}, EID = {155}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {80}, ADDRESS = {Warsaw, Poland}, }
Endnote
%0 Conference Proceedings %A Kesselheim, Thomas %A Kodric, Bojana %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Price of Anarchy for Mechanisms with Risk-Averse Agents : %G eng %U http://hdl.handle.net/21.11116/0000-0002-A32D-4 %R 10.4230/LIPIcs.ICALP.2018.155 %U urn:nbn:de:0030-drops-91599 %D 2017 %B 44th International Colloquium on Automata, Languages, and Programming %Z date of event: 2017-07-10 - 2017-07-14 %C Warsaw, Poland %B 44th International Colloquium on Automata, Languages, and Programming %E Chatzigiannakis, Ioannis; Indyk, Piotr; Kuhn, Fabian; Muscholl, Anca %P 1 - 14 %Z sequence number: 155 %I Schloss Dagstuhl %@ 978-3-95977-041-5 %B Leibniz International Proceedings in Informatics %N 80 %U http://drops.dagstuhl.de/opus/volltexte/2018/9159/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[208]
A. Kinali, “Fractional Brownian Motion and its Application in the Simulation of Noise in Atomic Clocks,” in Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium (EFTF/IFC 2017), Besançon, France, 2017.
Export
BibTeX
@inproceedings{Kinali_EFTF/IFCS2017a, TITLE = {Fractional {Brownian} Motion and its Application in the Simulation of Noise in Atomic Clocks}, AUTHOR = {Kinali, Attila}, LANGUAGE = {eng}, ISBN = {978-1-5386-2916-1}, DOI = {10.1109/FCS.2017.8088906}, PUBLISHER = {IEEE}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium (EFTF/IFC 2017)}, PAGES = {408--409}, ADDRESS = {Besan{\c c}on, France}, }
Endnote
%0 Conference Proceedings %A Kinali, Attila %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fractional Brownian Motion and its Application in the Simulation of Noise in Atomic Clocks : %G eng %U http://hdl.handle.net/21.11116/0000-0001-94BF-1 %R 10.1109/FCS.2017.8088906 %D 2017 %B Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium %Z date of event: 2017-07-09 - 2017-07-13 %C Besançon, France %B Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium %P 408 - 409 %I IEEE %@ 978-1-5386-2916-1
[209]
A. Kinali, “The Use of Fault-tolerant Clock Synchronization Algorithms for Time Scales,” in Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium (EFTF/IFC 2017), Besançon, France, 2017.
Export
BibTeX
@inproceedings{Kinali_EFTF/IFCS2017b, TITLE = {The Use of Fault-tolerant Clock Synchronization Algorithms for Time Scales}, AUTHOR = {Kinali, Attila}, LANGUAGE = {eng}, ISBN = {978-1-5386-2916-1}, DOI = {10.1109/FCS.2017.8088795}, PUBLISHER = {IEEE}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium (EFTF/IFC 2017)}, PAGES = {38--41}, ADDRESS = {Besan{\c c}on, France}, }
Endnote
%0 Conference Proceedings %A Kinali, Attila %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society %T The Use of Fault-tolerant Clock Synchronization Algorithms for Time Scales : %G eng %U http://hdl.handle.net/21.11116/0000-0001-94BD-3 %R 10.1109/FCS.2017.8088795 %D 2017 %B Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium %Z date of event: 2017-07-09 - 2017-07-13 %C Besançon, France %B Joint Conference of the European Frequency and Time Forum and IEEE International Frequency Control Symposium %P 38 - 41 %I IEEE %@ 978-1-5386-2916-1
[210]
B. Kodric, “Incentives in Dynamic Markets,” Universität des Saarlandes, Saarbrücken, 2017.
Export
BibTeX
@phdthesis{Kodric_PhD2018, TITLE = {Incentives in Dynamic Markets}, AUTHOR = {Kodric, Bojana}, LANGUAGE = {eng}, URL = {urn:nbn:de:bsz:291-scidok-ds-273509}, DOI = {10.22028/D291-27350}, SCHOOL = {Universit{\"a}t des Saarlandes}, ADDRESS = {Saarbr{\"u}cken}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, }
Endnote
%0 Thesis %A Kodric, Bojana %Y Hoefer, Martin %A referee: Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Incentives in Dynamic Markets : %G eng %U http://hdl.handle.net/21.11116/0000-0002-5C1C-9 %R 10.22028/D291-27350 %U urn:nbn:de:bsz:291-scidok-ds-273509 %F OTHER: hdl:20.500.11880/27173 %I Universität des Saarlandes %C Saarbrücken %D 2017 %P VIII, 96 p. %V phd %9 phd %U http://dx.doi.org/10.22028/D291-27350
[211]
C. Lenzen and R. Levi, “A Local Algorithm for the Sparse Spanning Graph Problem,” 2017. [Online]. Available: http://arxiv.org/abs/1703.05418. (arXiv: 1703.05418)
Abstract
Constructing a sparse \emph{spanning subgraph} is a fundamental primitive in graph theory. In this paper, we study this problem in the Centralized Local model, where the goal is to decide whether an edge is part of the spanning subgraph by examining only a small part of the input; yet, answers must be globally consistent and independent of prior queries. Unfortunately, maximally sparse spanning subgraphs, i.e., spanning trees, cannot be constructed efficiently in this model. Therefore, we settle for a spanning subgraph containing at most $(1+\varepsilon)n$ edges (where $n$ is the number of vertices and $\varepsilon$ is a given approximation/sparsity parameter). We achieve query complexity of $\tilde{O}(poly(\Delta/\varepsilon)n^{2/3})$,\footnote{$\tilde{O}$-notation hides polylogarithmic factors in $n$.} where $\Delta$ is the maximum degree of the input graph. Our algorithm is the first to do so on arbitrary graphs. Moreover, we achieve the additional property that our algorithm outputs a \emph{spanner,} i.e., distances are approximately preserved. With high probability, for each deleted edge there is a path of $O(poly(\Delta/\varepsilon)\log^2 n)$ hops in the output that connects its endpoints.
Export
BibTeX
@online{DBLP:journals/corr/LenzenL17, TITLE = {A Local Algorithm for the Sparse Spanning Graph Problem}, AUTHOR = {Lenzen, Christoph and Levi, Reut}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1703.05418}, EPRINT = {1703.05418}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Constructing a sparse \emph{spanning subgraph} is a fundamental primitive in graph theory. In this paper, we study this problem in the Centralized Local model, where the goal is to decide whether an edge is part of the spanning subgraph by examining only a small part of the input; yet, answers must be globally consistent and independent of prior queries. Unfortunately, maximally sparse spanning subgraphs, i.e., spanning trees, cannot be constructed efficiently in this model. Therefore, we settle for a spanning subgraph containing at most $(1+\varepsilon)n$ edges (where $n$ is the number of vertices and $\varepsilon$ is a given approximation/sparsity parameter). We achieve query complexity of $\tilde{O}(poly(\Delta/\varepsilon)n^{2/3})$,\footnote{$\tilde{O}$-notation hides polylogarithmic factors in $n$.} where $\Delta$ is the maximum degree of the input graph. Our algorithm is the first to do so on arbitrary graphs. Moreover, we achieve the additional property that our algorithm outputs a \emph{spanner,} i.e., distances are approximately preserved. With high probability, for each deleted edge there is a path of $O(poly(\Delta/\varepsilon)\log^2 n)$ hops in the output that connects its endpoints.}, }
Endnote
%0 Report %A Lenzen, Christoph %A Levi, Reut %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Local Algorithm for the Sparse Spanning Graph Problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-8AB0-4 %U http://arxiv.org/abs/1703.05418 %D 2017 %X Constructing a sparse \emph{spanning subgraph} is a fundamental primitive in graph theory. In this paper, we study this problem in the Centralized Local model, where the goal is to decide whether an edge is part of the spanning subgraph by examining only a small part of the input; yet, answers must be globally consistent and independent of prior queries. Unfortunately, maximally sparse spanning subgraphs, i.e., spanning trees, cannot be constructed efficiently in this model. Therefore, we settle for a spanning subgraph containing at most $(1+\varepsilon)n$ edges (where $n$ is the number of vertices and $\varepsilon$ is a given approximation/sparsity parameter). We achieve query complexity of $\tilde{O}(poly(\Delta/\varepsilon)n^{2/3})$,\footnote{$\tilde{O}$-notation hides polylogarithmic factors in $n$.} where $\Delta$ is the maximum degree of the input graph. Our algorithm is the first to do so on arbitrary graphs. Moreover, we achieve the additional property that our algorithm outputs a \emph{spanner,} i.e., distances are approximately preserved. With high probability, for each deleted edge there is a path of $O(poly(\Delta/\varepsilon)\log^2 n)$ hops in the output that connects its endpoints. %K Computer Science, Data Structures and Algorithms, cs.DS
[212]
C. Lenzen and R. Levi, “Brief Announcement: A Centralized Local Algorithm for the Sparse Spanning Graph Problem,” in 31st International Symposium on Distributed Computing (DISC 2017), Vienna, Austria, 2017.
Export
BibTeX
@inproceedings{Lenzen_DISC17, TITLE = {Brief Announcement: {A} Centralized Local Algorithm for the Sparse Spanning Graph Problem}, AUTHOR = {Lenzen, Christoph and Levi, Reut}, LANGUAGE = {eng}, ISBN = {978-3-95977-053-8}, URL = {urn:nbn:de:0030-drops-80064}, DOI = {10.4230/LIPIcs.DISC.2017.57}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {31st International Symposium on Distributed Computing (DISC 2017)}, EDITOR = {Richa, Andr{\'e}a W.}, PAGES = {1--3}, EID = {57}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {91}, ADDRESS = {Vienna, Austria}, }
Endnote
%0 Conference Proceedings %A Lenzen, Christoph %A Levi, Reut %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Brief Announcement: A Centralized Local Algorithm for the Sparse Spanning Graph Problem : %G eng %U http://hdl.handle.net/21.11116/0000-0000-3F28-E %U urn:nbn:de:0030-drops-80064 %R 10.4230/LIPIcs.DISC.2017.57 %D 2017 %B 31st International Symposium on Distributed Computing %Z date of event: 2017-10-16 - 2017-10-20 %C Vienna, Austria %B 31st International Symposium on Distributed Computing %E Richa, Andréa W. %P 1 - 3 %Z sequence number: 57 %I Schloss Dagstuhl %@ 978-3-95977-053-8 %B Leibniz International Proceedings in Informatics %N 91 %U http://drops.dagstuhl.de/opus/volltexte/2017/8006/
[213]
C. Lenzen and J. Rybicki, “Efficient Counting with Optimal Resilience,” SIAM Journal on Computing, vol. 46, no. 4, 2017.
Export
BibTeX
@article{LenzenRybicki2017, TITLE = {Efficient Counting with Optimal Resilience}, AUTHOR = {Lenzen, Christoph and Rybicki, Joel}, LANGUAGE = {eng}, ISSN = {0097-5397}, DOI = {10.1137/16M107877X}, PUBLISHER = {Society for Industrial and Applied Mathematics.}, ADDRESS = {Philadelphia, PA}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {SIAM Journal on Computing}, VOLUME = {46}, NUMBER = {4}, PAGES = {1473--1500}, }
Endnote
%0 Journal Article %A Lenzen, Christoph %A Rybicki, Joel %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Efficient Counting with Optimal Resilience : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-EE93-6 %R 10.1137/16M107877X %7 2017 %D 2017 %J SIAM Journal on Computing %V 46 %N 4 %& 1473 %P 1473 - 1500 %I Society for Industrial and Applied Mathematics. %C Philadelphia, PA %@ false
[214]
C. Lenzen and M. Medina, “Robust Routing Made Easy,” in Stabilization, Safety, and Security of Distributed Systems (SSS 2017), Boston, MA, USA, 2017.
Export
BibTeX
@inproceedings{LenzenSSS2017, TITLE = {Robust Routing Made Easy}, AUTHOR = {Lenzen, Christoph and Medina, Moti}, LANGUAGE = {eng}, ISBN = {978-3-319-69083-4}, DOI = {10.1007/978-3-319-69084-1_13}, PUBLISHER = {Springer}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {Stabilization, Safety, and Security of Distributed Systems (SSS 2017)}, EDITOR = {Spirakis, Paul and Tsigas, Philippas}, PAGES = {187--202}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {10616}, ADDRESS = {Boston, MA, USA}, }
Endnote
%0 Conference Proceedings %A Lenzen, Christoph %A Medina, Moti %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Robust Routing Made Easy : %G eng %U http://hdl.handle.net/21.11116/0000-0000-3F35-F %R 10.1007/978-3-319-69084-1_13 %D 2017 %B 19th International Symposium on Stabilization, Safety, and Security of Distributed System %Z date of event: 2017-11-05 - 2017-11-08 %C Boston, MA, USA %B Stabilization, Safety, and Security of Distributed Systems %E Spirakis, Paul; Tsigas, Philippas %P 187 - 202 %I Springer %@ 978-3-319-69083-4 %B Lecture Notes in Computer Science %N 10616
[215]
C. Lenzen and M. Medina, “Robust Routing Made Easy,” 2017. [Online]. Available: http://arxiv.org/abs/1705.04042. (arXiv: 1705.04042)
Abstract
Designing routing schemes is a multidimensional and complex task that depends on the objective function, the computational model (centralized vs. distributed), and the amount of uncertainty (online vs. offline). Nevertheless, there are quite a few well-studied general techniques, for a large variety of network problems. In contrast, in our view, practical techniques for designing robust routing schemes are scarce; while fault-tolerance has been studied from a number of angles, existing approaches are concerned with dealing with faults after the fact by rerouting, self-healing, or similar techniques. We argue that this comes at a high burden for the designer, as in such a system any algorithm must account for the effects of faults on communication. With the goal of initiating efforts towards addressing this issue, we showcase simple and generic transformations that can be used as a blackbox to increase resilience against (independently distributed) faults. Given a network and a routing scheme, we determine a reinforced network and corresponding routing scheme that faithfully preserves the specification and behavior of the original scheme. We show that reasonably small constant overheads in terms of size of the new network compared to the old are sufficient for substantially relaxing the reliability requirements on individual components. The main message in this paper is that the task of designing a robust routing scheme can be decoupled into (i) designing a routing scheme that meets the specification in a fault-free environment, (ii) ensuring that nodes correspond to fault-containment regions, i.e., fail (approximately) independently, and (iii) applying our transformation to obtain a reinforced network and a robust routing scheme that is fault-tolerant.
Export
BibTeX
@online{DBLP:journals/corr/LenzenM17, TITLE = {Robust Routing Made Easy}, AUTHOR = {Lenzen, Christoph and Medina, Moti}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1705.04042}, EPRINT = {1705.04042}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Designing routing schemes is a multidimensional and complex task that depends on the objective function, the computational model (centralized vs. distributed), and the amount of uncertainty (online vs. offline). Nevertheless, there are quite a few well-studied general techniques, for a large variety of network problems. In contrast, in our view, practical techniques for designing robust routing schemes are scarce; while fault-tolerance has been studied from a number of angles, existing approaches are concerned with dealing with faults after the fact by rerouting, self-healing, or similar techniques. We argue that this comes at a high burden for the designer, as in such a system any algorithm must account for the effects of faults on communication. With the goal of initiating efforts towards addressing this issue, we showcase simple and generic transformations that can be used as a blackbox to increase resilience against (independently distributed) faults. Given a network and a routing scheme, we determine a reinforced network and corresponding routing scheme that faithfully preserves the specification and behavior of the original scheme. We show that reasonably small constant overheads in terms of size of the new network compared to the old are sufficient for substantially relaxing the reliability requirements on individual components. The main message in this paper is that the task of designing a robust routing scheme can be decoupled into (i) designing a routing scheme that meets the specification in a fault-free environment, (ii) ensuring that nodes correspond to fault-containment regions, i.e., fail (approximately) independently, and (iii) applying our transformation to obtain a reinforced network and a robust routing scheme that is fault-tolerant.}, }
Endnote
%0 Report %A Lenzen, Christoph %A Medina, Moti %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Robust Routing Made Easy : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-8AAD-E %U http://arxiv.org/abs/1705.04042 %D 2017 %X Designing routing schemes is a multidimensional and complex task that depends on the objective function, the computational model (centralized vs. distributed), and the amount of uncertainty (online vs. offline). Nevertheless, there are quite a few well-studied general techniques, for a large variety of network problems. In contrast, in our view, practical techniques for designing robust routing schemes are scarce; while fault-tolerance has been studied from a number of angles, existing approaches are concerned with dealing with faults after the fact by rerouting, self-healing, or similar techniques. We argue that this comes at a high burden for the designer, as in such a system any algorithm must account for the effects of faults on communication. With the goal of initiating efforts towards addressing this issue, we showcase simple and generic transformations that can be used as a blackbox to increase resilience against (independently distributed) faults. Given a network and a routing scheme, we determine a reinforced network and corresponding routing scheme that faithfully preserves the specification and behavior of the original scheme. We show that reasonably small constant overheads in terms of size of the new network compared to the old are sufficient for substantially relaxing the reliability requirements on individual components. The main message in this paper is that the task of designing a robust routing scheme can be decoupled into (i) designing a routing scheme that meets the specification in a fault-free environment, (ii) ensuring that nodes correspond to fault-containment regions, i.e., fail (approximately) independently, and (iii) applying our transformation to obtain a reinforced network and a robust routing scheme that is fault-tolerant. %K Computer Science, Distributed, Parallel, and Cluster Computing, cs.DC
[216]
C. Lenzen, N. A. Lynch, C. Newport, and T. Radeva, “Searching without Communicating: Tradeoffs between Performance and Selection Complexity,” Distributed Computing, vol. 30, no. 3, 2017.
Export
BibTeX
@article{DBLP:journals/dc/LenzenLNR17, TITLE = {Searching without Communicating: {T}radeoffs between Performance and Selection Complexity}, AUTHOR = {Lenzen, Christoph and Lynch, Nancy A. and Newport, Calvin and Radeva, Tsvetomira}, LANGUAGE = {eng}, ISSN = {0178-2770}, DOI = {10.1007/s00446-016-0283-x}, PUBLISHER = {Springer International}, ADDRESS = {Berlin}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Distributed Computing}, VOLUME = {30}, NUMBER = {3}, PAGES = {169--191}, }
Endnote
%0 Journal Article %A Lenzen, Christoph %A Lynch, Nancy A. %A Newport, Calvin %A Radeva, Tsvetomira %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Searching without Communicating: Tradeoffs between Performance and Selection Complexity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-8AA7-9 %R 10.1007/s00446-016-0283-x %7 2016-09-19 %D 2017 %J Distributed Computing %V 30 %N 3 %& 169 %P 169 - 191 %I Springer International %C Berlin %@ false
[217]
C. Lenzen and J. Rybicki, “Self-stabilising Byzantine Clock Synchronisation is Almost as Easy as Consensus,” 2017. [Online]. Available: http://arxiv.org/abs/1705.06173. (arXiv: 1705.06173)
Abstract
We give fault-tolerant algorithms for establishing synchrony in distributed systems in which each of the $n$ nodes has its own clock. Our algorithms operate in a very strong fault model: we require self-stabilisation, i.e., the initial state of the system may be arbitrary, and there can be up to $f<n/3$ ongoing Byzantine faults, i.e., nodes that deviate from the protocol in an arbitrary manner. Furthermore, we assume that the local clocks of the nodes may progress at different speeds (clock drift) and communication has bounded delay. In this model, we study the pulse synchronisation problem, where the task is to guarantee that eventually all correct nodes generate well-separated local pulse events (i.e., unlabelled logical clock ticks) in a synchronised manner. Compared to prior work, we achieve exponential improvements in stabilisation time and the number of communicated bits, and give the first sublinear-time algorithm for the problem: - In the deterministic setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $\Theta(f \log f)$ bits per time unit. We exponentially reduce the number of bits broadcasted per time unit to $\Theta(\log f)$ while retaining the same stabilisation time. - In the randomised setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $O(1)$ bits per time unit. We exponentially reduce the stabilisation time to $\log^{O(1)} f$ while each node broadcasts $\log^{O(1)} f$ bits per time unit. These results are obtained by means of a recursive approach reducing the above task of self-stabilising pulse synchronisation in the bounded-delay model to non-self-stabilising binary consensus in the synchronous model. In general, our approach introduces at most logarithmic overheads in terms of stabilisation time and broadcasted bits over the underlying consensus routine.
Export
BibTeX
@online{DBLP:journals/corr/LenzenR17, TITLE = {Self-stabilising Byzantine Clock Synchronisation is Almost as Easy as Consensus}, AUTHOR = {Lenzen, Christoph and Rybicki, Joel}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1705.06173}, EPRINT = {1705.06173}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, ABSTRACT = {We give fault-tolerant algorithms for establishing synchrony in distributed systems in which each of the $n$ nodes has its own clock. Our algorithms operate in a very strong fault model: we require self-stabilisation, i.e., the initial state of the system may be arbitrary, and there can be up to $f<n/3$ ongoing Byzantine faults, i.e., nodes that deviate from the protocol in an arbitrary manner. Furthermore, we assume that the local clocks of the nodes may progress at different speeds (clock drift) and communication has bounded delay. In this model, we study the pulse synchronisation problem, where the task is to guarantee that eventually all correct nodes generate well-separated local pulse events (i.e., unlabelled logical clock ticks) in a synchronised manner. Compared to prior work, we achieve exponential improvements in stabilisation time and the number of communicated bits, and give the first sublinear-time algorithm for the problem: -- In the deterministic setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $\Theta(f \log f)$ bits per time unit. We exponentially reduce the number of bits broadcasted per time unit to $\Theta(\log f)$ while retaining the same stabilisation time. -- In the randomised setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $O(1)$ bits per time unit. We exponentially reduce the stabilisation time to $\log^{O(1)} f$ while each node broadcasts $\log^{O(1)} f$ bits per time unit. These results are obtained by means of a recursive approach reducing the above task of self-stabilising pulse synchronisation in the bounded-delay model to non-self-stabilising binary consensus in the synchronous model. In general, our approach introduces at most logarithmic overheads in terms of stabilisation time and broadcasted bits over the underlying consensus routine.}, }
Endnote
%0 Report %A Lenzen, Christoph %A Rybicki, Joel %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Self-stabilising Byzantine Clock Synchronisation is Almost as Easy as Consensus : %O Self-stabilising {B}yzantine Clock Synchronisation is Almost as Easy as Consensus %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-8AAA-3 %U http://arxiv.org/abs/1705.06173 %D 2017 %X We give fault-tolerant algorithms for establishing synchrony in distributed systems in which each of the $n$ nodes has its own clock. Our algorithms operate in a very strong fault model: we require self-stabilisation, i.e., the initial state of the system may be arbitrary, and there can be up to $f<n/3$ ongoing Byzantine faults, i.e., nodes that deviate from the protocol in an arbitrary manner. Furthermore, we assume that the local clocks of the nodes may progress at different speeds (clock drift) and communication has bounded delay. In this model, we study the pulse synchronisation problem, where the task is to guarantee that eventually all correct nodes generate well-separated local pulse events (i.e., unlabelled logical clock ticks) in a synchronised manner. Compared to prior work, we achieve exponential improvements in stabilisation time and the number of communicated bits, and give the first sublinear-time algorithm for the problem: - In the deterministic setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $\Theta(f \log f)$ bits per time unit. We exponentially reduce the number of bits broadcasted per time unit to $\Theta(\log f)$ while retaining the same stabilisation time. - In the randomised setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $O(1)$ bits per time unit. We exponentially reduce the stabilisation time to $\log^{O(1)} f$ while each node broadcasts $\log^{O(1)} f$ bits per time unit. These results are obtained by means of a recursive approach reducing the above task of self-stabilising pulse synchronisation in the bounded-delay model to non-self-stabilising binary consensus in the synchronous model. In general, our approach introduces at most logarithmic overheads in terms of stabilisation time and broadcasted bits over the underlying consensus routine. %K Computer Science, Distributed, Parallel, and Cluster Computing, cs.DC
[218]
C. Lenzen and J. Rybicki, “Self-Stabilising Byzantine Clock Synchronisation is Almost as Easy as Consensus,” in 31st International Symposium on Distributed Computing (DISC 2017), Vienna, Austria, 2017.
Export
BibTeX
@inproceedings{Lenzen_DISC17b, TITLE = {Self-Stabilising {B}yzantine Clock Synchronisation is Almost as Easy as Consensus}, AUTHOR = {Lenzen, Christoph and Rybicki, Joel}, LANGUAGE = {eng}, ISBN = {978-3-95977-053-8}, URL = {urn:nbn:de:0030-drops-79914}, DOI = {10.4230/LIPIcs.DISC.2017.32}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {31st International Symposium on Distributed Computing (DISC 2017)}, EDITOR = {Richa, Andr{\'e}a W.}, PAGES = {1--15}, EID = {32}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {91}, ADDRESS = {Vienna, Austria}, }
Endnote
%0 Conference Proceedings %A Lenzen, Christoph %A Rybicki, Joel %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Self-Stabilising Byzantine Clock Synchronisation is Almost as Easy as Consensus : %G eng %U http://hdl.handle.net/21.11116/0000-0000-3F31-3 %U urn:nbn:de:0030-drops-79914 %R 10.4230/LIPIcs.DISC.2017.32 %D 2017 %B 31st International Symposium on Distributed Computing %Z date of event: 2017-10-16 - 2017-10-20 %C Vienna, Austria %B 31st International Symposium on Distributed Computing %E Richa, Andréa W. %P 1 - 15 %Z sequence number: 32 %I Schloss Dagstuhl %@ 978-3-95977-053-8 %B Leibniz International Proceedings in Informatics %N 91 %U http://drops.dagstuhl.de/opus/volltexte/2017/7991/
[219]
R. Levi, G. Moshkovitz, D. Ron, R. Rubinfeld, and A. Shapira, “Constructing Near Spanning Trees with Few Local Inspections,” Random Structures and Algorithms, vol. 50, 2017.
Export
BibTeX
@article{LeviMRRS15, TITLE = {Constructing Near Spanning Trees with Few Local Inspections}, AUTHOR = {Levi, Reut and Moshkovitz, Guy and Ron, Dana and Rubinfeld, Ronitt and Shapira, Asaf}, LANGUAGE = {eng}, ISSN = {1042-9832}, DOI = {10.1002/rsa.20652}, PUBLISHER = {Wiley}, ADDRESS = {New York, N.Y.}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Random Structures and Algorithms}, VOLUME = {50}, PAGES = {183--200}, }
Endnote
%0 Journal Article %A Levi, Reut %A Moshkovitz, Guy %A Ron, Dana %A Rubinfeld, Ronitt %A Shapira, Asaf %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T Constructing Near Spanning Trees with Few Local Inspections : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-601B-C %R 10.1002/rsa.20652 %7 2016 %D 2017 %J Random Structures and Algorithms %V 50 %& 183 %P 183 - 200 %I Wiley %C New York, N.Y. %@ false
[220]
K. Mehlhorn, S. Näher, and P. Sanders, “Engineering DFS-Based Graph Algorithms,” 2017. [Online]. Available: http://arxiv.org/abs/1703.10023. (arXiv: 1703.10023)
Export
BibTeX
@online{MehlhornDFSarXiv2017, TITLE = {Engineering {DFS}-Based Graph Algorithms}, AUTHOR = {Mehlhorn, Kurt and N{\"a}her, Stefan and Sanders, Peter}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1703.10023}, EPRINT = {1703.10023}, EPRINTTYPE = {arXiv}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, }
Endnote
%0 Report %A Mehlhorn, Kurt %A Näher, Stefan %A Sanders, Peter %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T Engineering DFS-Based Graph Algorithms : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-4DAE-7 %U http://arxiv.org/abs/1703.10023 %D 2017
[221]
K. Mehlhorn, A. Neumann, and J. M. Schmidt, “Certifying 3-Edge-Connectivity,” Algorithmica, vol. 77, no. 2, 2017.
Export
BibTeX
@article{Mehlhorn_Neumann_Schmidt2017, TITLE = {Certifying 3-Edge-Connectivity}, AUTHOR = {Mehlhorn, Kurt and Neumann, Adrian and Schmidt, Jens M.}, LANGUAGE = {eng}, ISSN = {0178-4617}, DOI = {10.1007/s00453-015-0075-x}, PUBLISHER = {Springer}, ADDRESS = {New York, NY, USA}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Algorithmica}, VOLUME = {77}, NUMBER = {2}, PAGES = {309--335}, }
Endnote
%0 Journal Article %A Mehlhorn, Kurt %A Neumann, Adrian %A Schmidt, Jens M. %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Certifying 3-Edge-Connectivity : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-6971-B %R 10.1007/s00453-015-0075-x %7 2015-09-22 %D 2017 %J Algorithmica %V 77 %N 2 %& 309 %P 309 - 335 %I Springer %C New York, NY, USA %@ false
[222]
M. Mnich and E. J. van Leeuwen, “Polynomial Kernels for Deletion to Classes of Acyclic Digraphs,” Discrete Optimization, vol. 25, 2017.
Export
BibTeX
@article{MnichLeeuwen2017, TITLE = {Polynomial Kernels for Deletion to Classes of Acyclic Digraphs}, AUTHOR = {Mnich, Matthias and van Leeuwen, Erik Jan}, LANGUAGE = {eng}, ISSN = {1572-5286}, DOI = {10.1016/j.disopt.2017.02.002}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Discrete Optimization}, VOLUME = {25}, PAGES = {48--76}, }
Endnote
%0 Journal Article %A Mnich, Matthias %A van Leeuwen, Erik Jan %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Polynomial Kernels for Deletion to Classes of Acyclic Digraphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002D-DDCA-F %R 10.1016/j.disopt.2017.02.002 %7 2017 %D 2017 %J Discrete Optimization %V 25 %& 48 %P 48 - 76 %I Elsevier %C Amsterdam %@ false
[223]
N. Mustafa, K. Dutta, and A. Ghosh, “Simple Proof of Optimal Epsilon Nets,” Combinatorica, vol. First Online, 2017.
Export
BibTeX
@article{mustafa:hal-01360452, TITLE = {Simple Proof of Optimal Epsilon Nets}, AUTHOR = {Mustafa, Nabil and Dutta, Kunal and Ghosh, Arijit}, LANGUAGE = {eng}, ISSN = {0209-9683}, DOI = {10.1007/s00493-017-3564-5}, PUBLISHER = {Springer}, ADDRESS = {Heidelberg}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, JOURNAL = {Combinatorica}, VOLUME = {First Online}, PAGES = {1--9}, }
Endnote
%0 Journal Article %A Mustafa, Nabil %A Dutta, Kunal %A Ghosh, Arijit %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Simple Proof of Optimal Epsilon Nets : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-65CA-7 %R 10.1007/s00493-017-3564-5 %7 2017 %D 2017 %J Combinatorica %V First Online %& 1 %P 1 - 9 %I Springer %C Heidelberg %@ false
[224]
A. Oulasvirta, A. Feit, P. Lahteenlahti, and A. Karrenbauer, “Computational Support for Functionality Selection in Interaction Design,” ACM Transactions on Computer-Human Interaction, vol. 24, no. 5, 2017.
Export
BibTeX
@article{Oulasvirta2017, TITLE = {Computational Support for Functionality Selection in Interaction Design}, AUTHOR = {Oulasvirta, Antti and Feit, Anna and Lahteenlahti, Perttu and Karrenbauer, Andreas}, LANGUAGE = {eng}, DOI = {10.1145/3131608}, PUBLISHER = {ACM}, ADDRESS = {New York, NY}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {ACM Transactions on Computer-Human Interaction}, VOLUME = {24}, NUMBER = {5}, EID = {34}, }
Endnote
%0 Journal Article %A Oulasvirta, Antti %A Feit, Anna %A Lahteenlahti, Perttu %A Karrenbauer, Andreas %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Computational Support for Functionality Selection in Interaction Design : %G eng %U http://hdl.handle.net/21.11116/0000-0000-2DE0-1 %R 10.1145/3131608 %7 2017 %D 2017 %J ACM Transactions on Computer-Human Interaction %O TOCHI %V 24 %N 5 %Z sequence number: 34 %I ACM %C New York, NY
[225]
R. B. Tan, E. J. van Leeuwen, and J. van Leeuwen, “Shortcutting Directed and Undirected Networks with a Degree Constraint,” Discrete Applied Mathematics, vol. 220, 2017.
Export
BibTeX
@article{TanDAM2017, TITLE = {Shortcutting Directed and Undirected Networks with a Degree Constraint}, AUTHOR = {Tan, Richard B. and van Leeuwen, Erik Jan and van Leeuwen, Jan}, LANGUAGE = {eng}, ISSN = {0166-218X}, DOI = {10.1016/j.dam.2016.12.016}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, JOURNAL = {Discrete Applied Mathematics}, VOLUME = {220}, PAGES = {91--117}, }
Endnote
%0 Journal Article %A Tan, Richard B. %A van Leeuwen, Erik Jan %A van Leeuwen, Jan %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Shortcutting Directed and Undirected Networks with a Degree Constraint : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-539D-F %R 10.1016/j.dam.2016.12.016 %7 2017 %D 2017 %J Discrete Applied Mathematics %V 220 %& 91 %P 91 - 117 %I Elsevier %C Amsterdam %@ false
[226]
G. Tarawneh, M. Függer, and C. Lenzen, “Metastability Tolerant Computing,” in 23rd IEEE International Symposium on Asynchronous Circuits and Systems, San Diego, CA, USA, 2017.
Export
BibTeX
@inproceedings{TarawnehASYNC2017, TITLE = {Metastability Tolerant Computing}, AUTHOR = {Tarawneh, Ghaith and F{\"u}gger, Matthias and Lenzen, Christoph}, LANGUAGE = {eng}, ISBN = {978-1-5386-2749-5}, DOI = {10.1109/ASYNC.2017.9}, PUBLISHER = {IEEE}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, DATE = {2017}, BOOKTITLE = {23rd IEEE International Symposium on Asynchronous Circuits and Systems}, PAGES = {25--32}, ADDRESS = {San Diego, CA, USA}, }
Endnote
%0 Conference Proceedings %A Tarawneh, Ghaith %A Függer, Matthias %A Lenzen, Christoph %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Metastability Tolerant Computing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-80CB-2 %R 10.1109/ASYNC.2017.9 %D 2017 %B 23rd IEEE International Symposium on Asynchronous Circuits and Systems %Z date of event: 2017-05-21 - 2017-05-24 %C San Diego, CA, USA %B 23rd IEEE International Symposium on Asynchronous Circuits and Systems %P 25 - 32 %I IEEE %@ 978-1-5386-2749-5
[227]
D. Ziegler, A. Abujabal, R. S. Roy, and G. Weikum, “Efficiency-aware Answering of Compositional Questions using Answer Type Prediction,” in The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017), Taipei, Taiwan, 2017.
Export
BibTeX
@inproceedings{ZieglerIJCNLP2017, TITLE = {Efficiency-aware Answering of Compositional Questions using Answer Type Prediction}, AUTHOR = {Ziegler, David and Abujabal, Abdalghani and Roy, Rishiraj Saha and Weikum, Gerhard}, LANGUAGE = {eng}, ISBN = {978-1-948087-01-8}, URL = {http://aclweb.org/anthology/I17-2038}, PUBLISHER = {Asian Federation of Natural Language Processing}, YEAR = {2017}, MARGINALMARK = {$\bullet$}, BOOKTITLE = {The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)}, PAGES = {222--227}, ADDRESS = {Taipei, Taiwan}, }
Endnote
%0 Conference Proceedings %A Ziegler, David %A Abujabal, Abdalghani %A Roy, Rishiraj Saha %A Weikum, Gerhard %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society Databases and Information Systems, MPI for Informatics, Max Planck Society %T Efficiency-aware Answering of Compositional Questions using Answer Type Prediction : %G eng %U http://hdl.handle.net/21.11116/0000-0000-3B5F-5 %U http://aclweb.org/anthology/I17-2038 %D 2017 %B 8th International Joint Conference on Natural Language Processing %Z date of event: 2017-11-27 - 2017-12-01 %C Taipei, Taiwan %B The 8th International Joint Conference on Natural Language Processing %P 222 - 227 %I Asian Federation of Natural Language Processing %@ 978-1-948087-01-8
2016
[228]
I. Abraham, S. Chechik, and S. Krinninger, “Fully Dynamic All-pairs Shortest Paths with Worst-case Update-time revisited,” 2016. [Online]. Available: http://arxiv.org/abs/1607.05132. (arXiv: 1607.05132)
Abstract
We revisit the classic problem of dynamically maintaining shortest paths between all pairs of nodes of a directed weighted graph. The allowed updates are insertions and deletions of nodes and their incident edges. We give worst-case guarantees on the time needed to process a single update (in contrast to related results, the update time is not amortized over a sequence of updates). Our main result is a simple randomized algorithm that for any parameter $c>1$ has a worst-case update time of $O(cn^{2+2/3} \log^{4/3}{n})$ and answers distance queries correctly with probability $1-1/n^c$, against an adaptive online adversary if the graph contains no negative cycle. The best deterministic algorithm is by Thorup [STOC 2005] with a worst-case update time of $\tilde O(n^{2+3/4})$ and assumes non-negative weights. This is the first improvement for this problem for more than a decade. Conceptually, our algorithm shows that randomization along with a more direct approach can provide better bounds.
Export
BibTeX
@online{Krinningerarxiv16, TITLE = {Fully Dynamic All-pairs Shortest Paths with Worst-case Update-time revisited}, AUTHOR = {Abraham, Ittai and Chechik, Shiri and Krinninger, Sebastian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1607.05132}, EPRINT = {1607.05132}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We revisit the classic problem of dynamically maintaining shortest paths between all pairs of nodes of a directed weighted graph. The allowed updates are insertions and deletions of nodes and their incident edges. We give worst-case guarantees on the time needed to process a single update (in contrast to related results, the update time is not amortized over a sequence of updates). Our main result is a simple randomized algorithm that for any parameter $c>1$ has a worst-case update time of $O(cn^{2+2/3} \log^{4/3}{n})$ and answers distance queries correctly with probability $1-1/n^c$, against an adaptive online adversary if the graph contains no negative cycle. The best deterministic algorithm is by Thorup [STOC 2005] with a worst-case update time of $\tilde O(n^{2+3/4})$ and assumes non-negative weights. This is the first improvement for this problem for more than a decade. Conceptually, our algorithm shows that randomization along with a more direct approach can provide better bounds.}, }
Endnote
%0 Report %A Abraham, Ittai %A Chechik, Shiri %A Krinninger, Sebastian %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fully Dynamic All-pairs Shortest Paths with Worst-case Update-time revisited : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-50F8-A %U http://arxiv.org/abs/1607.05132 %D 2016 %X We revisit the classic problem of dynamically maintaining shortest paths between all pairs of nodes of a directed weighted graph. The allowed updates are insertions and deletions of nodes and their incident edges. We give worst-case guarantees on the time needed to process a single update (in contrast to related results, the update time is not amortized over a sequence of updates). Our main result is a simple randomized algorithm that for any parameter $c>1$ has a worst-case update time of $O(cn^{2+2/3} \log^{4/3}{n})$ and answers distance queries correctly with probability $1-1/n^c$, against an adaptive online adversary if the graph contains no negative cycle. The best deterministic algorithm is by Thorup [STOC 2005] with a worst-case update time of $\tilde O(n^{2+3/4})$ and assumes non-negative weights. This is the first improvement for this problem for more than a decade. Conceptually, our algorithm shows that randomization along with a more direct approach can provide better bounds. %K Computer Science, Data Structures and Algorithms, cs.DS
[229]
I. Abraham, D. Durfee, I. Koutis, S. Krinninger, and R. Peng, “On Fully Dynamic Graph Sparsifiers,” in FOCS 2016, New Brunswick, NJ, USA, 2016.
Abstract
We initiate the study of dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-spectral sparsifier with amortized update time $poly(\log{n}, \epsilon^{-1})$. Second, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-cut sparsifier with \emph{worst-case} update time $poly(\log{n}, \epsilon^{-1})$. Both sparsifiers have size $ n \cdot poly(\log{n}, \epsilon^{-1})$. Third, we apply our dynamic sparsifier algorithm to obtain a fully dynamic algorithm for maintaining a $(1 + \epsilon)$-approximation to the value of the maximum flow in an unweighted, undirected, bipartite graph with amortized update time $poly(\log{n}, \epsilon^{-1})$.
Export
BibTeX
@inproceedings{Abrahamdkkp2016, TITLE = {On Fully Dynamic Graph Sparsifiers}, AUTHOR = {Abraham, Ittai and Durfee, David and Koutis, Ioannis and Krinninger, Sebastian and Peng, Richard}, LANGUAGE = {eng}, ISBN = {978-1-5090-3933-3}, DOI = {10.1109/FOCS.2016.44}, PUBLISHER = {IEEE}, YEAR = {2016}, DATE = {2016}, ABSTRACT = {We initiate the study of dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-spectral sparsifier with amortized update time $poly(\log{n}, \epsilon^{-1})$. Second, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-cut sparsifier with \emph{worst-case} update time $poly(\log{n}, \epsilon^{-1})$. Both sparsifiers have size $ n \cdot poly(\log{n}, \epsilon^{-1})$. Third, we apply our dynamic sparsifier algorithm to obtain a fully dynamic algorithm for maintaining a $(1 + \epsilon)$-approximation to the value of the maximum flow in an unweighted, undirected, bipartite graph with amortized update time $poly(\log{n}, \epsilon^{-1})$.}, BOOKTITLE = {FOCS 2016}, PAGES = {396--405}, ADDRESS = {New Brunswick, NJ, USA}, }
Endnote
%0 Conference Proceedings %A Abraham, Ittai %A Durfee, David %A Koutis, Ioannis %A Krinninger, Sebastian %A Peng, Richard %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On Fully Dynamic Graph Sparsifiers : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-52C6-A %R 10.1109/FOCS.2016.44 %D 2016 %B 57th Annual IEEE Symposium on Foundations of Computer Science %Z date of event: 2016-10-09 - 2016-10-11 %C New Brunswick, NJ, USA %X We initiate the study of dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-spectral sparsifier with amortized update time $poly(\log{n}, \epsilon^{-1})$. Second, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-cut sparsifier with \emph{worst-case} update time $poly(\log{n}, \epsilon^{-1})$. Both sparsifiers have size $ n \cdot poly(\log{n}, \epsilon^{-1})$. Third, we apply our dynamic sparsifier algorithm to obtain a fully dynamic algorithm for maintaining a $(1 + \epsilon)$-approximation to the value of the maximum flow in an unweighted, undirected, bipartite graph with amortized update time $poly(\log{n}, \epsilon^{-1})$. %K Computer Science, Data Structures and Algorithms, cs.DS %B FOCS 2016 %P 396 - 405 %I IEEE %@ 978-1-5090-3933-3
[230]
I. Abraham, D. Durfee, I. Koutis, S. Krinninger, and R. Peng, “On Fully Dynamic Graph Sparsifiers,” 2016. [Online]. Available: http://arxiv.org/abs/1604.02094. (arXiv: 1604.02094)
Abstract
We initiate the study of dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-spectral sparsifier with amortized update time $poly(\log{n}, \epsilon^{-1})$. Second, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-cut sparsifier with \emph{worst-case} update time $poly(\log{n}, \epsilon^{-1})$. Both sparsifiers have size $ n \cdot poly(\log{n}, \epsilon^{-1})$. Third, we apply our dynamic sparsifier algorithm to obtain a fully dynamic algorithm for maintaining a $(1 + \epsilon)$-approximation to the value of the maximum flow in an unweighted, undirected, bipartite graph with amortized update time $poly(\log{n}, \epsilon^{-1})$.
Export
BibTeX
@online{Abrahamdkkp16, TITLE = {On Fully Dynamic Graph Sparsifiers}, AUTHOR = {Abraham, Ittai and Durfee, David and Koutis, Ioannis and Krinninger, Sebastian and Peng, Richard}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1604.02094}, EPRINT = {1604.02094}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We initiate the study of dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-spectral sparsifier with amortized update time $poly(\log{n}, \epsilon^{-1})$. Second, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-cut sparsifier with \emph{worst-case} update time $poly(\log{n}, \epsilon^{-1})$. Both sparsifiers have size $ n \cdot poly(\log{n}, \epsilon^{-1})$. Third, we apply our dynamic sparsifier algorithm to obtain a fully dynamic algorithm for maintaining a $(1 + \epsilon)$-approximation to the value of the maximum flow in an unweighted, undirected, bipartite graph with amortized update time $poly(\log{n}, \epsilon^{-1})$.}, }
Endnote
%0 Report %A Abraham, Ittai %A Durfee, David %A Koutis, Ioannis %A Krinninger, Sebastian %A Peng, Richard %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On Fully Dynamic Graph Sparsifiers : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-510E-1 %U http://arxiv.org/abs/1604.02094 %D 2016 %X We initiate the study of dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-spectral sparsifier with amortized update time $poly(\log{n}, \epsilon^{-1})$. Second, we give a fully dynamic algorithm for maintaining a $ (1 \pm \epsilon) $-cut sparsifier with \emph{worst-case} update time $poly(\log{n}, \epsilon^{-1})$. Both sparsifiers have size $ n \cdot poly(\log{n}, \epsilon^{-1})$. Third, we apply our dynamic sparsifier algorithm to obtain a fully dynamic algorithm for maintaining a $(1 + \epsilon)$-approximation to the value of the maximum flow in an unweighted, undirected, bipartite graph with amortized update time $poly(\log{n}, \epsilon^{-1})$. %K Computer Science, Data Structures and Algorithms, cs.DS
[231]
H. Ackermann, P. Berenbrink, S. Fischer, and M. Hoefer, “Concurrent Imitation Dynamics in Congestion Games,” Distributed Computing, vol. 29, no. 2, 2016.
Export
BibTeX
@article{Ackermann2016, TITLE = {Concurrent Imitation Dynamics in Congestion Games}, AUTHOR = {Ackermann, Heiner and Berenbrink, Petra and Fischer, Simon and Hoefer, Martin}, LANGUAGE = {eng}, ISSN = {0178-2770}, DOI = {10.1007/s00446-014-0223-6}, PUBLISHER = {Springer International}, ADDRESS = {Berlin}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Distributed Computing}, VOLUME = {29}, NUMBER = {2}, PAGES = {105--125}, }
Endnote
%0 Journal Article %A Ackermann, Heiner %A Berenbrink, Petra %A Fischer, Simon %A Hoefer, Martin %+ External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Concurrent Imitation Dynamics in Congestion Games : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-C479-5 %R 10.1007/s00446-014-0223-6 %7 2014 %D 2016 %J Distributed Computing %V 29 %N 2 %& 105 %P 105 - 125 %I Springer International %C Berlin %@ false
[232]
A. Adamaszek, A. Antoniadis, and T. Mömke, “Airports and Railways: Facility Location Meets Network Design,” in 33rd International Symposium on Theoretical Aspects of Computer Science (STACS 2016), Orléans, France, 2016.
Export
BibTeX
@inproceedings{AdamaszekSTACS2016, TITLE = {Airports and Railways: {F}acility Location Meets Network Design}, AUTHOR = {Adamaszek, Anna and Antoniadis, Antonios and M{\"o}mke, Tobias}, LANGUAGE = {eng}, ISSN = {1868-896}, ISBN = {978-3-95977-001-9}, URL = {urn:nbn:de:0030-drops-57074}, DOI = {10.4230/LIPIcs.STACS.2016.6}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2016}, BOOKTITLE = {33rd International Symposium on Theoretical Aspects of Computer Science (STACS 2016)}, EDITOR = {Ollinger, Nicolas and Vollmer, Heribert}, PAGES = {1--14}, EID = {6}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {47}, ADDRESS = {Orl{\'e}ans, France}, }
Endnote
%0 Conference Proceedings %A Adamaszek, Anna %A Antoniadis, Antonios %A Mömke, Tobias %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Airports and Railways: Facility Location Meets Network Design : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-4312-A %R 10.4230/LIPIcs.STACS.2016.6 %U urn:nbn:de:0030-drops-57074 %D 2016 %B 33rd International Symposium on Theoretical Aspects of Computer Science %Z date of event: 2016-02-17 - 2016-02-20 %C Orléans, France %B 33rd International Symposium on Theoretical Aspects of Computer Science %E Ollinger, Nicolas; Vollmer, Heribert %P 1 - 14 %Z sequence number: 6 %I Schloss Dagstuhl %@ 978-3-95977-001-9 %B Leibniz International Proceedings in Informatics %N 47 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2016/5707/
[233]
A. Adamaszek, P. Chalermsook, A. Ene, and A. Wiese, “Submodular Unsplittable Flow on Trees,” in Integer Programming and Combinatorial Optimization (IPCO 2016), Liège, Belgium, 2016.
Export
BibTeX
@inproceedings{AdamaszekIPCO2016, TITLE = {Submodular Unsplittable Flow on Trees}, AUTHOR = {Adamaszek, Anna and Chalermsook, Parinya and Ene, Alina and Wiese, Andreas}, LANGUAGE = {eng}, ISBN = {978-3-319-33460-8}, DOI = {10.1007/978-3-319-33461-5_28}, PUBLISHER = {Springer}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {Integer Programming and Combinatorial Optimization (IPCO 2016)}, EDITOR = {Louveaux, Quentin and Skutella, Martin}, PAGES = {337--349}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9682}, ADDRESS = {Li{\`e}ge, Belgium}, }
Endnote
%0 Conference Proceedings %A Adamaszek, Anna %A Chalermsook, Parinya %A Ene, Alina %A Wiese, Andreas %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Submodular Unsplittable Flow on Trees : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-0244-8 %R 10.1007/978-3-319-33461-5_28 %D 2016 %B 18th Conference on Integer Programming and Combinatorial Optimization %Z date of event: 2016-06-01 - 2016-06-03 %C Li&#232;ge, Belgium %B Integer Programming and Combinatorial Optimization %E Louveaux, Quentin; Skutella, Martin %P 337 - 349 %I Springer %@ 978-3-319-33460-8 %B Lecture Notes in Computer Science %N 9682
[234]
N. Alon, S. Moran, and A. Yehudayoff, “Sign Rank Versus VC Dimension,” in 29th Annual Conference on Learning Theory (COLT 2016), New York, NY, USA, 2016.
Export
BibTeX
@inproceedings{MoranCOLT2016, TITLE = {Sign Rank Versus {VC} Dimension}, AUTHOR = {Alon, Noga and Moran, Shay and Yehudayoff, Amir}, LANGUAGE = {eng}, ISSN = {1938-7228}, YEAR = {2016}, BOOKTITLE = {29th Annual Conference on Learning Theory (COLT 2016)}, EDITOR = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad}, SERIES = {JMLR Workshop \& Conference Proceedings}, VOLUME = {49}, ADDRESS = {New York, NY, USA}, }
Endnote
%0 Conference Proceedings %A Alon, Noga %A Moran, Shay %A Yehudayoff, Amir %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Sign Rank Versus VC Dimension : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-51AE-C %D 2016 %B 29th Annual Conference on Learning Theory %Z date of event: 2016-06-23 - 2016-06-26 %C New York, NY, USA %B 29th Annual Conference on Learning Theory %E Feldman, Vitaly; Rakhlin, Alexander; Shamir, Ohad %B JMLR Workshop & Conference Proceedings %N 49 %@ false
[235]
E. Althaus, B. Beber, W. Damm, S. Disch, W. Hagemann, A. Rakow, C. Scholl, U. Waldmann, and B. Wirtz, “Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization,” SFB/TR 14 AVACS, ATR103, 2016.
Abstract
This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as those naturally arising when incorporating health state monitoring and degradation levels into the controller design. Such models cannot -- in contrast to purely functional controller models -- be analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and discrete part of the state space. The optimization techniques shown consistently yield a speedup of about 20 against previously published results for a similar benchmark suite, and complement these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows us to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and $2^{71}$ discrete states, 20 continuous variables and $2^{199}$ discrete states, and 9 continuous variables and $2^{271}$ discrete states.
Export
BibTeX
@techreport{AlthausBeberDammEtAl2016ATR, TITLE = {Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization}, AUTHOR = {Althaus, Ernst and Beber, Bj{\"o}rn and Damm, Werner and Disch, Stefan and Hagemann, Willem and Rakow, Astrid and Scholl, Christoph and Waldmann, Uwe and Wirtz, Boris}, LANGUAGE = {eng}, ISSN = {1860-9821}, NUMBER = {ATR103}, INSTITUTION = {SFB/TR 14 AVACS}, YEAR = {2016}, DATE = {2016}, ABSTRACT = {This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as naturally arising when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to purely functional controller models -- not analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and discrete part of the state space. The optimization techniques shown yield consistently a speedup of about 20 against previously published results for a similar benchmark suite, and complement these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and 2 to the 71 discrete states, 20 continuous variables and 2 to the 199 discrete states, and 9 continuous variables and 2 to the 271 discrete states.}, TYPE = {AVACS Technical Report}, VOLUME = {103}, }
Endnote
%0 Report %A Althaus, Ernst %A Beber, Bj&#246;rn %A Damm, Werner %A Disch, Stefan %A Hagemann, Willem %A Rakow, Astrid %A Scholl, Christoph %A Waldmann, Uwe %A Wirtz, Boris %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations Automation of Logic, MPI for Informatics, Max Planck Society International Max Planck Research School, MPI for Informatics, Max Planck Society External Organizations External Organizations Automation of Logic, MPI for Informatics, Max Planck Society External Organizations %T Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4540-0 %Y SFB/TR 14 AVACS %D 2016 %P 93 p. %X This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as naturally arising when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to purely functional controller models -- not analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and discrete part of the state space. The optimization techniques shown yield consistently a speedup of about 20 against previously published results for a similar benchmark suite, and complement these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and 2 to the 71 discrete states, 20 continuous variables and 2 to the 199 discrete states, and 9 continuous variables and 2 to the 271 discrete states. %B AVACS Technical Report %N 103 %@ false %U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_103.pdf
[236]
A. Antoniadis, N. Barcelo, M. Nugent, K. Pruhs, K. Schewior, and M. Scquizzato, “Chasing Convex Bodies and Functions,” in LATIN 2016: Theoretical Informatics, Ensenada, Mexico, 2016.
Export
BibTeX
@inproceedings{AntoniadisLATIN2016, TITLE = {Chasing Convex Bodies and Functions}, AUTHOR = {Antoniadis, Antonios and Barcelo, Neal and Nugent, Michael and Pruhs, Kirk and Schewior, Kevin and Scquizzato, Michele}, LANGUAGE = {eng}, ISBN = {978-3-662-49528-5}, DOI = {10.1007/978-3-662-49529-2_6}, PUBLISHER = {Springer}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {LATIN 2016: Theoretical Informatics}, EDITOR = {Kranakis, Evangelos and Navarro, Gonzalo and Ch{\'a}vez, Edgar}, PAGES = {68--81}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {9644}, ADDRESS = {Ensenada, Mexico}, }
Endnote
%0 Conference Proceedings %A Antoniadis, Antonios %A Barcelo, Neal %A Nugent, Michael %A Pruhs, Kirk %A Schewior, Kevin %A Scquizzato, Michele %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations %T Chasing Convex Bodies and Functions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-48D5-E %R 10.1007/978-3-662-49529-2_6 %D 2016 %B 12th Latin American Theoretical Informatics Symposium %Z date of event: 2016-04-11 - 2016-04-15 %C Ensenada, Mexico %B LATIN 2016: Theoretical Informatics %E Kranakis, Evangelos; Navarro, Gonzalo; Ch&#225;vez, Edgar %P 68 - 81 %I Springer %@ 978-3-662-49528-5 %B Lecture Notes in Computer Science %N 9644
[237]
J. Babu, M. Basavaraju, L. S. Chandran, and M. C. Francis, “On Induced Colourful Paths in Triangle-free Graphs,” 2016. [Online]. Available: http://arxiv.org/abs/1604.06070. (arXiv: 1604.06070)
Abstract
Given a graph $G=(V,E)$ whose vertices have been properly coloured, we say that a path in $G$ is "colourful" if no two vertices in the path have the same colour. It is a corollary of the Gallai-Roy Theorem that every properly coloured graph contains a colourful path on $\chi(G)$ vertices. It is interesting to think of what analogous result one could obtain if one considers induced colourful paths instead of just colourful paths. We explore a conjecture that states that every properly coloured triangle-free graph $G$ contains an induced colourful path on $\chi(G)$ vertices. As proving this conjecture in its fullest generality seems to be difficult, we study a special case of the conjecture. We show that the conjecture is true when the girth of $G$ is equal to $\chi(G)$. Even this special case of the conjecture does not seem to have an easy proof: our method involves a detailed analysis of a special kind of greedy colouring algorithm. This result settles the conjecture for every properly coloured triangle-free graph $G$ with girth at least $\chi(G)$.
Export
BibTeX
@online{BBCF2016, TITLE = {On Induced Colourful Paths in Triangle-free Graphs}, AUTHOR = {Babu, Jasine and Basavaraju, Manu and Chandran, L. Sunil and Francis, Mathew C.}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1604.06070}, EPRINT = {1604.06070}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Given a graph $G=(V,E)$ whose vertices have been properly coloured, we say that a path in $G$ is "colourful" if no two vertices in the path have the same colour. It is a corollary of the Gallai-Roy Theorem that every properly coloured graph contains a colourful path on $\chi(G)$ vertices. It is interesting to think of what analogous result one could obtain if one considers induced colourful paths instead of just colourful paths. We explore a conjecture that states that every properly coloured triangle-free graph $G$ contains an induced colourful path on $\chi(G)$ vertices. As proving this conjecture in its fullest generality seems to be difficult, we study a special case of the conjecture. We show that the conjecture is true when the girth of $G$ is equal to $\chi(G)$. Even this special case of the conjecture does not seem to have an easy proof: our method involves a detailed analysis of a special kind of greedy colouring algorithm. This result settles the conjecture for every properly coloured triangle-free graph $G$ with girth at least $\chi(G)$.}, }
Endnote
%0 Report %A Babu, Jasine %A Basavaraju, Manu %A Chandran, L. Sunil %A Francis, Mathew C. %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T On Induced Colourful Paths in Triangle-free Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-6134-C %U http://arxiv.org/abs/1604.06070 %D 2016 %X Given a graph $G=(V,E)$ whose vertices have been properly coloured, we say that a path in $G$ is "colourful" if no two vertices in the path have the same colour. It is a corollary of the Gallai-Roy Theorem that every properly coloured graph contains a colourful path on $\chi(G)$ vertices. It is interesting to think of what analogous result one could obtain if one considers induced colourful paths instead of just colourful paths. We explore a conjecture that states that every properly coloured triangle-free graph $G$ contains an induced colourful path on $\chi(G)$ vertices. As proving this conjecture in its fullest generality seems to be difficult, we study a special case of the conjecture. We show that the conjecture is true when the girth of $G$ is equal to $\chi(G)$. Even this special case of the conjecture does not seem to have an easy proof: our method involves a detailed analysis of a special kind of greedy colouring algorithm. This result settles the conjecture for every properly coloured triangle-free graph $G$ with girth at least $\chi(G)$. %K Mathematics, Combinatorics, math.CO,
[238]
R. Becker, M. Fickert, and A. Karrenbauer, “A Novel Dual Ascent Algorithm for Solving the Min-Cost Flow Problem,” in Proceedings of the Eighteenth Workshop on Algorithm Engineering and Experiments (ALENEX 2016), Arlington, VA, USA, 2016.
Export
BibTeX
@inproceedings{BeckerALENEX2016, TITLE = {A Novel Dual Ascent Algorithm for Solving the Min-Cost Flow Problem}, AUTHOR = {Becker, Ruben and Fickert, Maximilian and Karrenbauer, Andreas}, LANGUAGE = {eng}, ISBN = {978-1-61197-431-7}, DOI = {10.1137/1.9781611974317.13}, PUBLISHER = {SIAM}, YEAR = {2016}, BOOKTITLE = {Proceedings of the Eighteenth Workshop on Algorithm Engineering and Experiments (ALENEX 2016)}, EDITOR = {Goodrich, Michael and Mitzenmacher, Michael}, PAGES = {151--159}, ADDRESS = {Arlington, VA, USA}, }
Endnote
%0 Conference Proceedings %A Becker, Ruben %A Fickert, Maximilian %A Karrenbauer, Andreas %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T A Novel Dual Ascent Algorithm for Solving the Min-Cost Flow Problem : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-4AD0-6 %R 10.1137/1.9781611974317.13 %D 2016 %B Eighteenth Workshop on Algorithm Engineering and Experiments %Z date of event: 2016-01-10 - 2016-01-10 %C Arlington, VA, USA %B Proceedings of the Eighteenth Workshop on Algorithm Engineering and Experiments %E Goodrich, Michael; Mitzenmacher, Michael %P 151 - 159 %I SIAM %@ 978-1-61197-431-7
[239]
R. Becker, M. Sagraloff, V. Sharma, and C. Yap, “A Simple Near-Optimal Subdivision Algorithm for Complex Root Isolation based on the Pellet Test and Newton Iteration,” 2016. [Online]. Available: http://arxiv.org/abs/1509.06231. (arXiv: 1509.06231)
Abstract
We describe a subdivision algorithm for isolating the complex roots of a polynomial $F\in\mathbb{C}[x]$. Our model assumes that each coefficient of $F$ has an oracle to return an approximation to any absolute error bound. Given any box $\mathcal{B}$ in the complex plane containing only simple roots of $F$, our algorithm returns disjoint isolating disks for the roots in $\mathcal{B}$. Our complexity analysis bounds the absolute error to which the coefficients of $F$ have to be provided, the total number of iterations, and the overall bit complexity. This analysis shows that the complexity of our algorithm is controlled by the geometry of the roots in a near neighborhood of the input box $\mathcal{B}$, namely, the number of roots and their pairwise distances. The number of subdivision steps is near-optimal. For the \emph{benchmark problem}, namely, to isolate all the roots of an integer polynomial of degree $n$ with coefficients of bitsize less than $\tau$, our algorithm needs $\tilde{O}(n^3+n^2\tau)$ bit operations, which is comparable to the record bound of Pan (2002). It is the first time that such a bound has been achieved using subdivision methods, and independent of divide-and-conquer techniques such as Sch\"onhage's splitting circle technique. Our algorithm uses the quadtree construction of Weyl (1924) with two key ingredients: using Pellet's Theorem (1881) combined with Graeffe iteration, we derive a soft test to count the number of roots in a disk. Using Newton iteration combined with bisection, in a form inspired by the quadratic interval method from Abbot (2006), we achieve quadratic convergence towards root clusters. Relative to the divide-conquer algorithms, our algorithm is simple with the potential of being practical. This paper is self-contained: we provide pseudo-code for all subroutines used by our algorithm.
Export
BibTeX
@online{BeckerarXiv2016, TITLE = {A Simple Near-Optimal Subdivision Algorithm for Complex Root Isolation based on the Pellet Test and Newton Iteration}, AUTHOR = {Becker, Ruben and Sagraloff, Michael and Sharma, Vikram and Yap, Chee}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1509.06231}, EPRINT = {1509.06231}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We describe a subdivision algorithm for isolating the complex roots of a polynomial $F\in\mathbb{C}[x]$. Our model assumes that each coefficient of $F$ has an oracle to return an approximation to any absolute error bound. Given any box $\mathcal{B}$ in the complex plane containing only simple roots of $F$, our algorithm returns disjoint isolating disks for the roots in $\mathcal{B}$. Our complexity analysis bounds the absolute error to which the coefficients of $F$ have to be provided, the total number of iterations, and the overall bit complexity. This analysis shows that the complexity of our algorithm is controlled by the geometry of the roots in a near neighborhood of the input box $\mathcal{B}$, namely, the number of roots and their pairwise distances. The number of subdivision steps is near-optimal. For the \emph{benchmark problem}, namely, to isolate all the roots of an integer polynomial of degree $n$ with coefficients of bitsize less than $\tau$, our algorithm needs $\tilde{O}(n^3+n^2\tau)$ bit operations, which is comparable to the record bound of Pan (2002). It is the first time that such a bound has been achieved using subdivision methods, and independent of divide-and-conquer techniques such as Sch\"onhage's splitting circle technique. Our algorithm uses the quadtree construction of Weyl (1924) with two key ingredients: using Pellet's Theorem (1881) combined with Graeffe iteration, we derive a soft test to count the number of roots in a disk. Using Newton iteration combined with bisection, in a form inspired by the quadratic interval method from Abbot (2006), we achieve quadratic convergence towards root clusters. Relative to the divide-conquer algorithms, our algorithm is simple with the potential of being practical. This paper is self-contained: we provide pseudo-code for all subroutines used by our algorithm.}, }
Endnote
%0 Report %A Becker, Ruben %A Sagraloff, Michael %A Sharma, Vikram %A Yap, Chee %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Simple Near-Optimal Subdivision Algorithm for Complex Root Isolation based on the Pellet Test and Newton Iteration : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-02B8-2 %U http://arxiv.org/abs/1509.06231 %D 2016 %X We describe a subdivision algorithm for isolating the complex roots of a polynomial $F\in\mathbb{C}[x]$. Our model assumes that each coefficient of $F$ has an oracle to return an approximation to any absolute error bound. Given any box $\mathcal{B}$ in the complex plane containing only simple roots of $F$, our algorithm returns disjoint isolating disks for the roots in $\mathcal{B}$. Our complexity analysis bounds the absolute error to which the coefficients of $F$ have to be provided, the total number of iterations, and the overall bit complexity. This analysis shows that the complexity of our algorithm is controlled by the geometry of the roots in a near neighborhood of the input box $\mathcal{B}$, namely, the number of roots and their pairwise distances. The number of subdivision steps is near-optimal. For the \emph{benchmark problem}, namely, to isolate all the roots of an integer polynomial of degree $n$ with coefficients of bitsize less than $\tau$, our algorithm needs $\tilde{O}(n^3+n^2\tau)$ bit operations, which is comparable to the record bound of Pan (2002). It is the first time that such a bound has been achieved using subdivision methods, and independent of divide-and-conquer techniques such as Sch\"onhage's splitting circle technique. Our algorithm uses the quadtree construction of Weyl (1924) with two key ingredients: using Pellet's Theorem (1881) combined with Graeffe iteration, we derive a soft test to count the number of roots in a disk. Using Newton iteration combined with bisection, in a form inspired by the quadratic interval method from Abbot (2006), we achieve quadratic convergence towards root clusters. Relative to the divide-conquer algorithms, our algorithm is simple with the potential of being practical. This paper is self-contained: we provide pseudo-code for all subroutines used by our algorithm. %K Computer Science, Numerical Analysis, cs.NA,Computer Science, Symbolic Computation, cs.SC,Mathematics, Numerical Analysis, math.NA
[240]
R. Becker, A. Karrenbauer, and K. Mehlhorn, “An Integer Interior Point Method for Min-Cost Flow Using Arc Contractions and Deletions,” 2016. [Online]. Available: http://arxiv.org/abs/1612.04689. (arXiv: 1612.04689)
Abstract
We present an interior point method for the min-cost flow problem that uses arc contractions and deletions to steer clear from the boundary of the polytope when path-following methods come too close. We obtain a randomized algorithm running in expected $\tilde O( m^{3/2} )$ time that only visits integer lattice points in the vicinity of the central path of the polytope. This enables us to use integer arithmetic like classical combinatorial algorithms typically do. We provide explicit bounds on the size of the numbers that appear during all computations. By presenting an integer arithmetic interior point algorithm we avoid the tediousness of floating point error analysis and achieve a method that is guaranteed to be free of any numerical issues. We thereby eliminate one of the drawbacks of numerical methods in contrast to combinatorial min-cost flow algorithms that still yield the most efficient implementations in practice, despite their inferior worst-case time complexity.
Export
BibTeX
@online{DBLP:journals/corr/BeckerKM16, TITLE = {An Integer Interior Point Method for Min-Cost Flow Using Arc Contractions and Deletions}, AUTHOR = {Becker, Ruben and Karrenbauer, Andreas and Mehlhorn, Kurt}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1612.04689}, EPRINT = {1612.04689}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We present an interior point method for the min-cost flow problem that uses arc contractions and deletions to steer clear from the boundary of the polytope when path-following methods come too close. We obtain a randomized algorithm running in expected $\tilde O( m^{3/2} )$ time that only visits integer lattice points in the vicinity of the central path of the polytope. This enables us to use integer arithmetic like classical combinatorial algorithms typically do. We provide explicit bounds on the size of the numbers that appear during all computations. By presenting an integer arithmetic interior point algorithm we avoid the tediousness of floating point error analysis and achieve a method that is guaranteed to be free of any numerical issues. We thereby eliminate one of the drawbacks of numerical methods in contrast to combinatorial min-cost flow algorithms that still yield the most efficient implementations in practice, despite their inferior worst-case time complexity.}, }
Endnote
%0 Report %A Becker, Ruben %A Karrenbauer, Andreas %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T An Integer Interior Point Method for Min-Cost Flow Using Arc Contractions and Deletions : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5714-E %U http://arxiv.org/abs/1612.04689 %D 2016 %X We present an interior point method for the min-cost flow problem that uses arc contractions and deletions to steer clear from the boundary of the polytope when path-following methods come too close. We obtain a randomized algorithm running in expected $\tilde O( m^{3/2} )$ time that only visits integer lattice points in the vicinity of the central path of the polytope. This enables us to use integer arithmetic like classical combinatorial algorithms typically do. We provide explicit bounds on the size of the numbers that appear during all computations. By presenting an integer arithmetic interior point algorithm we avoid the tediousness of floating point error analysis and achieve a method that is guaranteed to be free of any numerical issues. We thereby eliminate one of the drawbacks of numerical methods in contrast to combinatorial min-cost flow algorithms that still yield the most efficient implementations in practice, despite their inferior worst-case time complexity. %K Computer Science, Data Structures and Algorithms, cs.DS,Mathematics, Numerical Analysis, math.NA,Mathematics, Optimization and Control, math.OC
[241]
R. Becker, M. Sagraloff, V. Sharma, J. Xu, and C. Yap, “Complexity Analysis of Root Clustering for a Complex Polynomial,” in ISSAC 2016, 41st International Symposium on Symbolic and Algebraic Computation, Waterloo, Canada, 2016.
Export
BibTeX
@inproceedings{BeckerISSAC2016, TITLE = {Complexity Analysis of Root Clustering for a Complex Polynomial}, AUTHOR = {Becker, Ruben and Sagraloff, Michael and Sharma, Vikram and Xu, Juan and Yap, Chee}, LANGUAGE = {eng}, ISBN = {978-1-4503-4380-0}, DOI = {10.1145/2930889.2930939}, PUBLISHER = {ACM}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {ISSAC 2016, 41st International Symposium on Symbolic and Algebraic Computation}, EDITOR = {Rosenkranz, Markus}, PAGES = {71--78}, ADDRESS = {Waterloo, Canada}, }
Endnote
%0 Conference Proceedings %A Becker, Ruben %A Sagraloff, Michael %A Sharma, Vikram %A Xu, Juan %A Yap, Chee %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations %T Complexity Analysis of Root Clustering for a Complex Polynomial : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-02C0-E %R 10.1145/2930889.2930939 %D 2016 %B 41st International Symposium on Symbolic and Algebraic Computation %Z date of event: 2016-06-19 - 2016-06-22 %C Waterloo, Canada %B ISSAC 2016 %E Rosenkranz, Markus %P 71 - 78 %I ACM %@ 978-1-4503-4380-0
[242]
R. Becker, A. Karrenbauer, S. Krinninger, and C. Lenzen, “Near-Optimal Approximate Shortest Paths and Transshipment in Distributed and Streaming Models,” 2016. [Online]. Available: http://arxiv.org/abs/1607.05127. (arXiv: 1607.05127)
Abstract
We present a method for solving the transshipment problem - also known as uncapacitated minimum cost flow - up to a multiplicative error of $1 + \epsilon$ in undirected graphs with polynomially bounded integer edge weights using a tailored gradient descent algorithm. An important special case of the transshipment problem is the single-source shortest paths (SSSP) problem. Our gradient descent algorithm takes $O(\epsilon^{-3} \mathrm{polylog} n)$ iterations and in each iteration it needs to solve a variant of the transshipment problem up to a multiplicative error of $\mathrm{polylog} n$. In particular, this allows us to perform a single iteration by computing a solution on a sparse spanner of logarithmic stretch. As a consequence, we improve prior work by obtaining the following results: (1) RAM model: $(1+\epsilon)$-approximate transshipment in $\tilde{O}(\epsilon^{-3}(m + n^{1 + o(1)}))$ computational steps (leveraging a recent $O(m^{1+o(1)})$-step $O(1)$-approximation due to Sherman [2016]). (2) Multipass Streaming model: $(1 + \epsilon)$-approximate transshipment and SSSP using $\tilde{O}(n) $ space and $\tilde{O}(\epsilon^{-O(1)})$ passes. (3) Broadcast Congested Clique model: $(1 + \epsilon)$-approximate transshipment and SSSP using $\tilde{O}(\epsilon^{-O(1)})$ rounds. (4) Broadcast Congest model: $(1 + \epsilon)$-approximate SSSP using $\tilde{O}(\epsilon^{-O(1)}(\sqrt{n} + D))$ rounds, where $ D $ is the (hop) diameter of the network. The previous fastest algorithms for the last three models above leverage sparse hop sets. We bypass the hop set computation; using a spanner is sufficient in our method. The above bounds assume non-negative integer edge weights that are polynomially bounded in $n$; for general non-negative weights, running times scale with the logarithm of the maximum ratio between non-zero weights.
Export
BibTeX
@online{Becker_arXiv1607.05127, TITLE = {Near-Optimal Approximate Shortest Paths and Transshipment in Distributed and Streaming Models}, AUTHOR = {Becker, Ruben and Karrenbauer, Andreas and Krinninger, Sebastian and Lenzen, Christoph}, URL = {http://arxiv.org/abs/1607.05127}, EPRINT = {1607.05127}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We present a method for solving the transshipment problem -- also known as uncapacitated minimum cost flow -- up to a multiplicative error of $1 + \epsilon$ in undirected graphs with polynomially bounded integer edge weights using a tailored gradient descent algorithm. An important special case of the transshipment problem is the single-source shortest paths (SSSP) problem. Our gradient descent algorithm takes $O(\epsilon^{-3} \mathrm{polylog} n)$ iterations and in each iteration it needs to solve a variant of the transshipment problem up to a multiplicative error of $\mathrm{polylog} n$. In particular, this allows us to perform a single iteration by computing a solution on a sparse spanner of logarithmic stretch. As a consequence, we improve prior work by obtaining the following results: (1) RAM model: $(1+\epsilon)$-approximate transshipment in $\tilde{O}(\epsilon^{-3}(m + n^{1 + o(1)}))$ computational steps (leveraging a recent $O(m^{1+o(1)})$-step $O(1)$-approximation due to Sherman [2016]). (2) Multipass Streaming model: $(1 + \epsilon)$-approximate transshipment and SSSP using $\tilde{O}(n) $ space and $\tilde{O}(\epsilon^{-O(1)})$ passes. (3) Broadcast Congested Clique model: $(1 + \epsilon)$-approximate transshipment and SSSP using $\tilde{O}(\epsilon^{-O(1)})$ rounds. (4) Broadcast Congest model: $(1 + \epsilon)$-approximate SSSP using $\tilde{O}(\epsilon^{-O(1)}(\sqrt{n} + D))$ rounds, where $ D $ is the (hop) diameter of the network. The previous fastest algorithms for the last three models above leverage sparse hop sets. We bypass the hop set computation; using a spanner is sufficient in our method. The above bounds assume non-negative integer edge weights that are polynomially bounded in $n$; for general non-negative weights, running times scale with the logarithm of the maximum ratio between non-zero weights.}, }
Endnote
%0 Report %A Becker, Ruben %A Karrenbauer, Andreas %A Krinninger, Sebastian %A Lenzen, Christoph %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Near-Optimal Approximate Shortest Paths and Transshipment in Distributed and Streaming Models : %U http://hdl.handle.net/11858/00-001M-0000-002B-8419-1 %U http://arxiv.org/abs/1607.05127 %D 2016 %X We present a method for solving the transshipment problem - also known as uncapacitated minimum cost flow - up to a multiplicative error of $1 + \epsilon$ in undirected graphs with polynomially bounded integer edge weights using a tailored gradient descent algorithm. An important special case of the transshipment problem is the single-source shortest paths (SSSP) problem. Our gradient descent algorithm takes $O(\epsilon^{-3} \mathrm{polylog} n)$ iterations and in each iteration it needs to solve a variant of the transshipment problem up to a multiplicative error of $\mathrm{polylog} n$. In particular, this allows us to perform a single iteration by computing a solution on a sparse spanner of logarithmic stretch. As a consequence, we improve prior work by obtaining the following results: (1) RAM model: $(1+\epsilon)$-approximate transshipment in $\tilde{O}(\epsilon^{-3}(m + n^{1 + o(1)}))$ computational steps (leveraging a recent $O(m^{1+o(1)})$-step $O(1)$-approximation due to Sherman [2016]). (2) Multipass Streaming model: $(1 + \epsilon)$-approximate transshipment and SSSP using $\tilde{O}(n) $ space and $\tilde{O}(\epsilon^{-O(1)})$ passes. (3) Broadcast Congested Clique model: $(1 + \epsilon)$-approximate transshipment and SSSP using $\tilde{O}(\epsilon^{-O(1)})$ rounds. (4) Broadcast Congest model: $(1 + \epsilon)$-approximate SSSP using $\tilde{O}(\epsilon^{-O(1)}(\sqrt{n} + D))$ rounds, where $ D $ is the (hop) diameter of the network. The previous fastest algorithms for the last three models above leverage sparse hop sets. We bypass the hop set computation; using a spanner is sufficient in our method. The above bounds assume non-negative integer edge weights that are polynomially bounded in $n$; for general non-negative weights, running times scale with the logarithm of the maximum ratio between non-zero weights. %K Computer Science, Data Structures and Algorithms, cs.DS
[243]
X. Bei, J. Garg, and M. Hoefer, “Ascending-Price Algorithms for Unknown Markets,” in EC’16, ACM Conference on Economics and Computation, Maastricht, The Netherlands, 2016.
Export
BibTeX
@inproceedings{BeiEC2016a, TITLE = {Ascending-Price Algorithms for Unknown Markets}, AUTHOR = {Bei, Xiaohui and Garg, Jugal and Hoefer, Martin}, LANGUAGE = {eng}, ISBN = {978-1-4503-3936-0}, DOI = {10.1145/2940716.2940765}, PUBLISHER = {ACM}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {EC'16, ACM Conference on Economics and Computation}, PAGES = {699--699}, ADDRESS = {Maastricht, The Netherlands}, }
Endnote
%0 Conference Proceedings %A Bei, Xiaohui %A Garg, Jugal %A Hoefer, Martin %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Ascending-Price Algorithms for Unknown Markets : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-841F-6 %R 10.1145/2940716.2940765 %D 2016 %B ACM Conference on Economics and Computation %Z date of event: 2016-07-24 - 2016-07-28 %C Maastricht, The Netherlands %B EC'16 %P 699 - 699 %I ACM %@ 978-1-4503-3936-0
[244]
X. Bei, J. Garg, M. Hoefer, and K. Mehlhorn, “Computing Equilibria in Markets with Budget-Additive Utilities,” in 24th Annual European Symposium on Algorithms (ESA 2016), Aarhus, Denmark, 2016.
Export
BibTeX
@inproceedings{BeiESA2016, TITLE = {Computing Equilibria in Markets with Budget-Additive Utilities}, AUTHOR = {Bei, Xiaohui and Garg, Jugal and Hoefer, Martin and Mehlhorn, Kurt}, LANGUAGE = {eng}, ISBN = {978-3-95977-015-6}, URL = {urn:nbn:de:0030-drops-63504}, DOI = {10.4230/LIPIcs.ESA.2016.8}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2016}, BOOKTITLE = {24th Annual European Symposium on Algorithms (ESA 2016)}, EDITOR = {Sankowski, Piotr and Zaroliagis, Christos}, PAGES = {1--14}, EID = {8}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {57}, ADDRESS = {Aarhus, Denmark}, }
Endnote
%0 Conference Proceedings %A Bei, Xiaohui %A Garg, Jugal %A Hoefer, Martin %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Computing Equilibria in Markets with Budget-Additive Utilities : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-479B-5 %R 10.4230/LIPIcs.ESA.2016.8 %U urn:nbn:de:0030-drops-63504 %D 2016 %B 24th Annual European Symposium on Algorithms %Z date of event: 2016-08-22 - 2016-08-26 %C Aarhus, Denmark %B 24th Annual European Symposium on Algorithms %E Sankowski, Piotr; Zaroliagis, Christos %P 1 - 14 %Z sequence number: 8 %I Schloss Dagstuhl %@ 978-3-95977-015-6 %B Leibniz International Proceedings in Informatics %N 57 %U http://drops.dagstuhl.de/opus/volltexte/2016/6350/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[245]
X. Bei, J. Garg, M. Hoefer, and K. Mehlhorn, “Computing Equilibria in Markets with Budget-Additive Utilities,” 2016. [Online]. Available: http://arxiv.org/abs/1603.07210. (arXiv: 1603.07210)
Abstract
We present the first analysis of Fisher markets with buyers that have budget-additive utility functions. Budget-additive utilities are elementary concave functions with numerous applications in online adword markets and revenue optimization problems. They extend the standard case of linear utilities and have been studied in a variety of other market models. In contrast to the frequently studied CES utilities, they have a global satiation point which can imply multiple market equilibria with quite different characteristics. Our main result is an efficient combinatorial algorithm to compute a market equilibrium with a Pareto-optimal allocation of goods. It relies on a new descending-price approach and, as a special case, also implies a novel combinatorial algorithm for computing a market equilibrium in linear Fisher markets. We complement these positive results with a number of hardness results for related computational questions. We prove that it is NP-hard to compute a market equilibrium that maximizes social welfare, and it is PPAD-hard to find any market equilibrium with utility functions with separate satiation points for each buyer and each good.
Export
BibTeX
@online{BeiGargHoeferMehlhorn2016, TITLE = {Computing Equilibria in Markets with Budget-Additive Utilities}, AUTHOR = {Bei, Xiaohui and Garg, Jugal and Hoefer, Martin and Mehlhorn, Kurt}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1603.07210}, EPRINT = {1603.07210}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We present the first analysis of Fisher markets with buyers that have budget-additive utility functions. Budget-additive utilities are elementary concave functions with numerous applications in online adword markets and revenue optimization problems. They extend the standard case of linear utilities and have been studied in a variety of other market models. In contrast to the frequently studied CES utilities, they have a global satiation point which can imply multiple market equilibria with quite different characteristics. Our main result is an efficient combinatorial algorithm to compute a market equilibrium with a Pareto-optimal allocation of goods. It relies on a new descending-price approach and, as a special case, also implies a novel combinatorial algorithm for computing a market equilibrium in linear Fisher markets. We complement these positive results with a number of hardness results for related computational questions. We prove that it is NP-hard to compute a market equilibrium that maximizes social welfare, and it is PPAD-hard to find any market equilibrium with utility functions with separate satiation points for each buyer and each good.}, }
Endnote
%0 Report %A Bei, Xiaohui %A Garg, Jugal %A Hoefer, Martin %A Mehlhorn, Kurt %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Computing Equilibria in Markets with Budget-Additive Utilities : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002A-FCC0-C %U http://arxiv.org/abs/1603.07210 %D 2016 %X We present the first analysis of Fisher markets with buyers that have budget-additive utility functions. Budget-additive utilities are elementary concave functions with numerous applications in online adword markets and revenue optimization problems. They extend the standard case of linear utilities and have been studied in a variety of other market models. In contrast to the frequently studied CES utilities, they have a global satiation point which can imply multiple market equilibria with quite different characteristics. Our main result is an efficient combinatorial algorithm to compute a market equilibrium with a Pareto-optimal allocation of goods. It relies on a new descending-price approach and, as a special case, also implies a novel combinatorial algorithm for computing a market equilibrium in linear Fisher markets. We complement these positive results with a number of hardness results for related computational questions. We prove that it is NP-hard to compute a market equilibrium that maximizes social welfare, and it is PPAD-hard to find any market equilibrium with utility functions with separate satiation points for each buyer and each good. %K Computer Science, Computer Science and Game Theory, cs.GT,Computer Science, Data Structures and Algorithms, cs.DS
[246]
X. Bei, W. Chen, J. Garg, M. Hoefer, and X. Sun, “Learning Market Parameters Using Aggregate Demand Queries,” in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 2016.
Export
BibTeX
@inproceedings{BeiAAAI2016, TITLE = {Learning Market Parameters Using Aggregate Demand Queries}, AUTHOR = {Bei, Xiaohui and Chen, Wei and Garg, Jugal and Hoefer, Martin and Sun, Xiaoming}, LANGUAGE = {eng}, ISBN = {978-1-57735-760-5}, PUBLISHER = {AAAI}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence}, PAGES = {404--410}, ADDRESS = {Phoenix, AZ, USA}, }
Endnote
%0 Conference Proceedings %A Bei, Xiaohui %A Chen, Wei %A Garg, Jugal %A Hoefer, Martin %A Sun, Xiaoming %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T Learning Market Parameters Using Aggregate Demand Queries : %G eng %U http://hdl.handle.net/11858/00-001M-0000-0029-AC36-C %D 2016 %B Thirtieth AAAI Conference on Artificial Intelligence %Z date of event: 2016-02-12 - 2016-02-17 %C Phoenix, AZ, USA %B Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence %P 404 - 410 %I AAAI %@ 978-1-57735-760-5 %U http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/12052/11612
[247]
A. Bishnu, K. Dutta, A. Ghosh, and S. Paul, “(1,j)-set Problem in Graphs,” Discrete Mathematics, vol. 339, no. 10, 2016.
Export
BibTeX
@article{DBLP:journals/dm/BishnuDGP16, TITLE = {$(1,j)$-Set Problem in Graphs}, AUTHOR = {Bishnu, Arijit and Dutta, Kunal and Ghosh, Arijit and Paul, Subhabrata}, LANGUAGE = {eng}, ISSN = {0012-365X}, DOI = {10.1016/j.disc.2016.04.008}, PUBLISHER = {Elsevier}, ADDRESS = {Amsterdam}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Discrete Mathematics}, VOLUME = {339}, NUMBER = {10}, PAGES = {2515--2525}, }
Endnote
%0 Journal Article %A Bishnu, Arijit %A Dutta, Kunal %A Ghosh, Arijit %A Paul, Subhabrata %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations %T (1,j)-set Problem in Graphs : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-65B7-2 %R 10.1016/j.disc.2016.04.008 %7 2014-10-12 %D 2016 %J Discrete Mathematics %V 339 %N 10 %& 2515 %P 2515 - 2525 %I Elsevier %C Amsterdam %@ false
[248]
M. Bläser, G. Jindal, and A. Pandey, “Greedy Strikes Again: A Deterministic PTAS for Commutative Rank of Matrix Spaces,” Electronic Colloquium on Computational Complexity (ECCC): Report Series, vol. 145, 2016.
Export
BibTeX
@article{DBLP:journals/eccc/BlaserJP16, TITLE = {Greedy Strikes Again: {A} Deterministic {PTAS} for Commutative Rank of Matrix Spaces}, AUTHOR = {Bl{\"a}ser, Markus and Jindal, Gorav and Pandey, Anurag}, LANGUAGE = {eng}, ISSN = {1433-8092}, PUBLISHER = {Hasso-Plattner-Institut f{\"u}r Softwaretechnik GmbH}, ADDRESS = {Potsdam}, YEAR = {2016}, JOURNAL = {Electronic Colloquium on Computational Complexity (ECCC): Report Series}, VOLUME = {145}, PAGES = {1--12}, }
Endnote
%0 Journal Article %A Bl&#228;ser, Markus %A Jindal, Gorav %A Pandey, Anurag %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Greedy Strikes Again: A Deterministic PTAS for Commutative Rank of Matrix Spaces : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-4E5A-A %7 2016 %D 2016 %J Electronic Colloquium on Computational Complexity (ECCC): Report Series %V 145 %& 1 %P 1 - 12 %I Hasso-Plattner-Institut f&#252;r Softwaretechnik GmbH %C Potsdam %@ false %U https://eccc.weizmann.ac.il/report/2016/145/
[249]
G. Bodwin and S. Krinninger, “Fully Dynamic Spanners with Worst-Case Update Time,” 2016. [Online]. Available: http://arxiv.org/abs/1606.07864. (arXiv: 1606.07864)
Abstract
An $\alpha$-spanner of a graph $ G $ is a subgraph $ H $ such that $ H $ preserves all distances of $ G $ within a factor of $ \alpha $. In this paper, we give fully dynamic algorithms for maintaining a spanner $ H $ of a graph $ G $ undergoing edge insertions and deletions with worst-case guarantees on the running time after each update. In particular, our algorithms maintain: (1) a $3$-spanner with $ \tilde O (n^{1+1/2}) $ edges with worst-case update time $ \tilde O (n^{3/4}) $, or (2) a $5$-spanner with $ \tilde O (n^{1+1/3}) $ edges with worst-case update time $ \tilde O (n^{5/9}) $. These size/stretch tradeoffs are best possible (up to logarithmic factors). They can be extended to the weighted setting at very minor cost. Our algorithms are randomized and correct with high probability against an oblivious adversary. We also further extend our techniques to construct a $5$-spanner with suboptimal size/stretch tradeoff, but improved worst-case update time. To the best of our knowledge, these are the first dynamic spanner algorithms with sublinear worst-case update time guarantees. Since it is known how to maintain a spanner using small amortized but large worst-case update time [Baswana et al. SODA'08], obtaining algorithms with strong worst-case bounds, as presented in this paper, seems to be the next natural step for this problem.
Export
BibTeX
@online{BodwinK2016, TITLE = {Fully Dynamic Spanners with Worst-Case Update Time}, AUTHOR = {Bodwin, Greg and Krinninger, Sebastian}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1606.07864}, EPRINT = {1606.07864}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {An $\alpha$-spanner of a graph $ G $ is a subgraph $ H $ such that $ H $ preserves all distances of $ G $ within a factor of $ \alpha $. In this paper, we give fully dynamic algorithms for maintaining a spanner $ H $ of a graph $ G $ undergoing edge insertions and deletions with worst-case guarantees on the running time after each update. In particular, our algorithms maintain: (1) a $3$-spanner with $ \tilde O (n^{1+1/2}) $ edges with worst-case update time $ \tilde O (n^{3/4}) $, or (2) a $5$-spanner with $ \tilde O (n^{1+1/3}) $ edges with worst-case update time $ \tilde O (n^{5/9}) $. These size/stretch tradeoffs are best possible (up to logarithmic factors). They can be extended to the weighted setting at very minor cost. Our algorithms are randomized and correct with high probability against an oblivious adversary. We also further extend our techniques to construct a $5$-spanner with suboptimal size/stretch tradeoff, but improved worst-case update time. To the best of our knowledge, these are the first dynamic spanner algorithms with sublinear worst-case update time guarantees. Since it is known how to maintain a spanner using small amortized but large worst-case update time [Baswana et al. SODA'08], obtaining algorithms with strong worst-case bounds, as presented in this paper, seems to be the next natural step for this problem.}, }
Endnote
%0 Report %A Bodwin, Greg %A Krinninger, Sebastian %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fully Dynamic Spanners with Worst-Case Update Time : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-50FF-B %U http://arxiv.org/abs/1606.07864 %D 2016 %X An $\alpha$-spanner of a graph $ G $ is a subgraph $ H $ such that $ H $ preserves all distances of $ G $ within a factor of $ \alpha $. In this paper, we give fully dynamic algorithms for maintaining a spanner $ H $ of a graph $ G $ undergoing edge insertions and deletions with worst-case guarantees on the running time after each update. In particular, our algorithms maintain: (1) a $3$-spanner with $ \tilde O (n^{1+1/2}) $ edges with worst-case update time $ \tilde O (n^{3/4}) $, or (2) a $5$-spanner with $ \tilde O (n^{1+1/3}) $ edges with worst-case update time $ \tilde O (n^{5/9}) $. These size/stretch tradeoffs are best possible (up to logarithmic factors). They can be extended to the weighted setting at very minor cost. Our algorithms are randomized and correct with high probability against an oblivious adversary. We also further extend our techniques to construct a $5$-spanner with suboptimal size/stretch tradeoff, but improved worst-case update time. To the best of our knowledge, these are the first dynamic spanner algorithms with sublinear worst-case update time guarantees. Since it is known how to maintain a spanner using small amortized but large worst-case update time [Baswana et al. SODA'08], obtaining algorithms with strong worst-case bounds, as presented in this paper, seems to be the next natural step for this problem. %K Computer Science, Data Structures and Algorithms, cs.DS
[250]
G. Bodwin and S. Krinninger, “Fully Dynamic Spanners with Worst-Case Update Time,” in 24th Annual European Symposium on Algorithms (ESA 2016), Aarhus, Denmark, 2016.
Export
BibTeX
@inproceedings{BodwinK16, TITLE = {Fully Dynamic Spanners with Worst-Case Update Time}, AUTHOR = {Bodwin, Greg and Krinninger, Sebastian}, LANGUAGE = {eng}, ISSN = {1868-8969}, ISBN = {978-3-95977-015-6}, DOI = {10.4230/LIPIcs.ESA.2016.17}, PUBLISHER = {Schloss Dagstuhl}, YEAR = {2016}, BOOKTITLE = {24th Annual European Symposium on Algorithms (ESA 2016)}, EDITOR = {Sankowski, Piotr and Zaroliagis, Christos}, PAGES = {1--18}, EID = {17}, SERIES = {Leibniz International Proceedings in Informatics}, VOLUME = {57}, ADDRESS = {Aarhus, Denmark}, }
Endnote
%0 Conference Proceedings %A Bodwin, Greg %A Krinninger, Sebastian %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Fully Dynamic Spanners with Worst-Case Update Time : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-52CC-D %R 10.4230/LIPIcs.ESA.2016.17 %D 2016 %B 24th Annual European Symposium on Algorithms %Z date of event: 2016-08-22 - 2016-08-26 %C Aarhus, Denmark %B 24th Annual European Symposium on Algorithms %E Sankowski, Piotr; Zaroliagis, Christos %P 1 - 18 %Z sequence number: 17 %I Schloss Dagstuhl %@ 978-3-95977-015-6 %B Leibniz International Proceedings in Informatics %N 57 %@ false %U http://drops.dagstuhl.de/opus/volltexte/2016/6368/ %U http://drops.dagstuhl.de/doku/urheberrecht1.html
[251]
Y. Bouzidi, S. Lazard, G. Moroz, M. Pouget, F. Rouillier, and M. Sagraloff, “Solving Bivariate Systems Using Rational Univariate Representations,” Journal of Complexity, vol. 37, 2016.
Export
BibTeX
@article{Bouzidi2016, TITLE = {Solving bivariate systems using {Rational Univariate Representations}}, AUTHOR = {Bouzidi, Yacine and Lazard, Sylvain and Moroz, Guillaume and Pouget, Marc and Rouillier, Fabrice and Sagraloff, Michael}, LANGUAGE = {eng}, ISSN = {0885-064X}, DOI = {10.1016/j.jco.2016.07.002}, PUBLISHER = {Academic Press}, ADDRESS = {Orlando, Fla.}, YEAR = {2016}, DATE = {2016}, JOURNAL = {Journal of Complexity}, VOLUME = {37}, PAGES = {34--75}, }
Endnote
%0 Journal Article %A Bouzidi, Yacine %A Lazard, Sylvain %A Moroz, Guillaume %A Pouget, Marc %A Rouillier, Fabrice %A Sagraloff, Michael %+ External Organizations External Organizations External Organizations External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Solving Bivariate Systems Using Rational Univariate Representations : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-841C-C %R 10.1016/j.jco.2016.07.002 %7 2016-07-12 %D 2016 %J Journal of Complexity %V 37 %& 34 %P 34 - 75 %I Academic Press %C Orlando, Fla. %@ false
[252]
C. Brand and M. Sagraloff, “On the Complexity of Solving Zero-Dimensional Polynomial Systems via Projection,” 2016. [Online]. Available: http://arxiv.org/abs/1604.08944. (arXiv: 1604.08944)
Abstract
Given a zero-dimensional polynomial system consisting of n integer polynomials in n variables, we propose a certified and complete method to compute all complex solutions of the system as well as a corresponding separating linear form l with coefficients of small bit size. For computing l, we need to project the solutions into one dimension along O(n) distinct directions but no further algebraic manipulations. The solutions are then directly reconstructed from the considered projections. The first step is deterministic, whereas the second step uses randomization, thus being Las-Vegas. The theoretical analysis of our approach shows that the overall cost for the two problems considered above is dominated by the cost of carrying out the projections. We also give bounds on the bit complexity of our algorithms that are exclusively stated in terms of the number of variables, the total degree and the bitsize of the input polynomials.
Export
BibTeX
@online{BrandarXiv2016, TITLE = {On the Complexity of Solving Zero-Dimensional Polynomial Systems via Projection}, AUTHOR = {Brand, Cornelius and Sagraloff, Michael}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1604.08944}, EPRINT = {1604.08944}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {Given a zero-dimensional polynomial system consisting of n integer polynomials in n variables, we propose a certified and complete method to compute all complex solutions of the system as well as a corresponding separating linear form l with coefficients of small bit size. For computing l, we need to project the solutions into one dimension along O(n) distinct directions but no further algebraic manipulations. The solutions are then directly reconstructed from the considered projections. The first step is deterministic, whereas the second step uses randomization, thus being Las-Vegas. The theoretical analysis of our approach shows that the overall cost for the two problems considered above is dominated by the cost of carrying out the projections. We also give bounds on the bit complexity of our algorithms that are exclusively stated in terms of the number of variables, the total degree and the bitsize of the input polynomials.}, }
Endnote
%0 Report %A Brand, Cornelius %A Sagraloff, Michael %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Complexity of Solving Zero-Dimensional Polynomial Systems via Projection : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-02AF-7 %U http://arxiv.org/abs/1604.08944 %D 2016 %X Given a zero-dimensional polynomial system consisting of n integer polynomials in n variables, we propose a certified and complete method to compute all complex solutions of the system as well as a corresponding separating linear form l with coefficients of small bit size. For computing l, we need to project the solutions into one dimension along O(n) distinct directions but no further algebraic manipulations. The solutions are then directly reconstructed from the considered projections. The first step is deterministic, whereas the second step uses randomization, thus being Las-Vegas. The theoretical analysis of our approach shows that the overall cost for the two problems considered above is dominated by the cost of carrying out the projections. We also give bounds on the bit complexity of our algorithms that are exclusively stated in terms of the number of variables, the total degree and the bitsize of the input polynomials. %K Computer Science, Symbolic Computation, cs.SC,Computer Science, Computational Complexity, cs.CC
[253]
C. Brand and M. Sagraloff, “On the Complexity of Solving Zero-Dimensional Polynomial Systems via Projection,” in ISSAC 2016, 41st International Symposium on Symbolic and Algebraic Computation, Waterloo, Canada, 2016.
Export
BibTeX
@inproceedings{BrandISSAC2016, TITLE = {On the Complexity of Solving Zero-Dimensional Polynomial Systems via Projection}, AUTHOR = {Brand, Cornelius and Sagraloff, Michael}, LANGUAGE = {eng}, ISBN = {978-1-4503-4380-0}, DOI = {10.1145/2930889.2930934}, PUBLISHER = {ACM}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {ISSAC 2016, 41st International Symposium on Symbolic and Algebraic Computation}, EDITOR = {Rosenkranz, Markus}, PAGES = {151--158}, ADDRESS = {Waterloo, Canada}, }
Endnote
%0 Conference Proceedings %A Brand, Cornelius %A Sagraloff, Michael %+ External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T On the Complexity of Solving Zero-Dimensional Polynomial Systems via Projection : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002B-02B2-E %R 10.1145/2930889.2930934 %D 2016 %B 41st International Symposium on Symbolic and Algebraic Computation %Z date of event: 2016-06-19 - 2016-06-22 %C Waterloo, Canada %B ISSAC 2016 %E Rosenkranz, Markus %P 151 - 158 %I ACM %@ 978-1-4503-4380-0
[254]
U. Brandes, E. Holm, and A. Karrenbauer, “Cliques in Regular Graphs and the Core-Periphery Problem in Social Networks,” in Combinatorial Optimization and Applications (COCOA 2016), Hong Kong, China, 2016.
Export
BibTeX
@inproceedings{BHK2016, TITLE = {Cliques in Regular Graphs and the Core-Periphery Problem in Social Networks}, AUTHOR = {Brandes, Ulrik and Holm, Eugenia and Karrenbauer, Andreas}, LANGUAGE = {eng}, ISBN = {978-3-319-48748-9}, DOI = {10.1007/978-3-319-48749-6_13}, PUBLISHER = {Springer}, YEAR = {2016}, DATE = {2016}, BOOKTITLE = {Combinatorial Optimization and Applications (COCOA 2016)}, EDITOR = {Chan, T-H. Hubert and Li, Minming and Wang, Lusheng}, PAGES = {175--186}, SERIES = {Lecture Notes in Computer Science}, VOLUME = {10043}, ADDRESS = {Hong Kong, China}, }
Endnote
%0 Conference Proceedings %A Brandes, Ulrik %A Holm, Eugenia %A Karrenbauer, Andreas %+ External Organizations External Organizations Algorithms and Complexity, MPI for Informatics, Max Planck Society %T Cliques in Regular Graphs and the Core-Periphery Problem in Social Networks : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-832D-8 %R 10.1007/978-3-319-48749-6_13 %D 2016 %B 10th Annual International Conference on Combinatorial Optimization and Applications %Z date of event: 2016-12-16 - 2016-12-18 %C Hong Kong, China %B Combinatorial Optimization and Applications %E Chan, T-H. Hubert; Li, Minming; Wang, Lusheng %P 175 - 186 %I Springer %@ 978-3-319-48748-9 %B Lecture Notes in Computer Science %N 10043
[255]
K. Bringmann, A. Grønlund, and K. G. Larsen, “A Dichotomy for Regular Expression Membership Testing,” 2016. [Online]. Available: http://arxiv.org/abs/1611.00918. (arXiv: 1611.00918)
Abstract
We study regular expression membership testing: Given a regular expression of size $m$ and a string of size $n$, decide whether the string is in the language described by the regular expression. Its classic $O(nm)$ algorithm is one of the big success stories of the 70s, which allowed pattern matching to develop into the standard tool that it is today. Many special cases of pattern matching have been studied that can be solved faster than in quadratic time. However, a systematic study of tractable cases was made possible only recently, with the first conditional lower bounds reported by Backurs and Indyk [FOCS'16]. Restricted to any "type" of homogeneous regular expressions of depth 2 or 3, they either presented a near-linear time algorithm or a quadratic conditional lower bound, with one exception known as the Word Break problem. In this paper we complete their work as follows: 1) We present two almost-linear time algorithms that generalize all known almost-linear time algorithms for special cases of regular expression membership testing. 2) We classify all types, except for the Word Break problem, into almost-linear time or quadratic time assuming the Strong Exponential Time Hypothesis. This extends the classification from depth 2 and 3 to any constant depth. 3) For the Word Break problem we give an improved $\tilde{O}(n m^{1/3} + m)$ algorithm. Surprisingly, we also prove a matching conditional lower bound for combinatorial algorithms. This establishes Word Break as the only intermediate problem. In total, we prove matching upper and lower bounds for any type of bounded-depth homogeneous regular expressions, which yields a full dichotomy for regular expression membership testing.
Export
BibTeX
@online{BringmannGL16, TITLE = {A Dichotomy for Regular Expression Membership Testing}, AUTHOR = {Bringmann, Karl and Gr{\o}nlund, Allan and Larsen, Kasper Green}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1611.00918}, EPRINT = {1611.00918}, EPRINTTYPE = {arXiv}, YEAR = {2016}, ABSTRACT = {We study regular expression membership testing: Given a regular expression of size $m$ and a string of size $n$, decide whether the string is in the language described by the regular expression. Its classic $O(nm)$ algorithm is one of the big success stories of the 70s, which allowed pattern matching to develop into the standard tool that it is today. Many special cases of pattern matching have been studied that can be solved faster than in quadratic time. However, a systematic study of tractable cases was made possible only recently, with the first conditional lower bounds reported by Backurs and Indyk [FOCS'16]. Restricted to any "type" of homogeneous regular expressions of depth 2 or 3, they either presented a near-linear time algorithm or a quadratic conditional lower bound, with one exception known as the Word Break problem. In this paper we complete their work as follows: 1) We present two almost-linear time algorithms that generalize all known almost-linear time algorithms for special cases of regular expression membership testing. 2) We classify all types, except for the Word Break problem, into almost-linear time or quadratic time assuming the Strong Exponential Time Hypothesis. This extends the classification from depth 2 and 3 to any constant depth. 3) For the Word Break problem we give an improved $\tilde{O}(n m^{1/3} + m)$ algorithm. Surprisingly, we also prove a matching conditional lower bound for combinatorial algorithms. This establishes Word Break as the only intermediate problem. In total, we prove matching upper and lower bounds for any type of bounded-depth homogeneous regular expressions, which yields a full dichotomy for regular expression membership testing.}, }
Endnote
%0 Report %A Bringmann, Karl %A Gr&#248;nlund, Allan %A Larsen, Kasper Green %+ Algorithms and Complexity, MPI for Informatics, Max Planck Society External Organizations External Organizations %T A Dichotomy for Regular Expression Membership Testing : %G eng %U http://hdl.handle.net/11858/00-001M-0000-002C-5300-F %U http://arxiv.org/abs/1611.00918 %D 2016 %X We study regular expression membership testing: Given a regular expression of size $m$ and a string of size $n$, decide whether the string is in the language described by the regular expression. Its classic $O(nm)$ algorithm is one of the big success stories of the 70s, which allowed pattern matching to develop into the standard tool that it is today. Many special cases of pattern matching have been studied that can be solved faster than in quadratic time. However, a systematic study of tractable cases was made possible only recently, with the first conditional lower bounds reported by Backurs and Indyk [FOCS'16]. Restricted to any "type" of homogeneous regular expressions of depth 2 or 3, they either presented a near-linear time algorithm or a quadratic conditional lower bound, with one exception known as the Word Break problem. In this paper we complete their work as follows: 1) We present two almost-linear time algorithms that generalize all known almost-linear time algorithms for special cases of regular expression membership testing. 2) We classify all types, except for the Word Break problem, into almost-linear time or quadratic time assuming the Strong Exponential Time Hypothesis. This extends the classification from depth 2 and 3 to any constant depth. 3) For the Word Break problem we give an improved $\tilde{O}(n m^{1/3} + m)$ algorithm. Surprisingly, we also prove a matching conditional lower bound for combinatorial algorithms. This establishes Word Break as the only intermediate problem. In total, we prove matching upper and lower bounds for any type of bounded-depth homogeneous regular expressions, which yields a full dichotomy for regular expression membership testing. %K Computer Science, Data Structures and Algorithms, cs.DS,Computer Science, Computational Complexity, cs.CC
[256]
K. Bringmann, R. Keusch, and J. Lengler, “Average Distance in a General Class of Scale-Free Networks with Underlying Geometry,” 2016. [Online]. Available: http://arxiv.org/abs/1602.05712. (arXiv: 1602.05712)